retail/clustering/bqml/bqml_scaled_clustering.ipynb | ###Markdown
View on GitHub How to build k-means clustering models for market segmentation using BigQuery ML. A common marketing analytics challenge is to understand consumer behavior and develop customer attributes or archetypes. As organizations get better at tackling this problem, they can activate marketing strategies to incorporate additional customer knowledge into their campaigns. Clustering algorithms are a common vehicle to address this challenge. They allow businesses to better segment and understand their customers and users. In the field of Machine Learning, which is a combination of both art and science, unsupervised learning may require more art than supervised learning algorithms. By definition, unsupervised learning has no single metric to guide the algorithm's learning process. Instead, the data science team will need to work hand in hand with business owners to determine feature selection, the optimal number of clusters (often abbreviated as k), and, most importantly, to gain a deeper understanding of what each cluster represents. How can clustering algorithms help businesses succeed? Clustering algorithms can help companies identify groups of similar customers that can be used for targeting in advertising campaigns. This is paramount in an era where customers expect personalization from brands. Using a public sample Google Analytics 360 e-commerce dataset on BigQuery, you will learn how to create and deploy clustering algorithms in production. You will also get an example of how to navigate unsupervised learning. Keep in mind, your clusters will be even more meaningful when you bring additional data. Objective By the end of this notebook, you will know how to:* Explore features to understand what might be interesting for a clustering model* Pre-process data into the correct format needed to create a clustering model using BigQuery ML* Train (and deploy) the k-means model in BigQuery ML* Evaluate the model* Make predictions using the model* Write the results to be used for batch prediction, for example, to send ads based on segmentation Dataset The [Google Analytics Sample](https://console.cloud.google.com/marketplace/details/obfuscated-ga360-data/obfuscated-ga360-data?filter=solution-type:dataset) dataset, hosted publicly on BigQuery, provides 12 months (August 2016 to August 2017) of obfuscated Google Analytics 360 data from the [Google Merchandise Store](https://www.googlemerchandisestore.com/), a real e-commerce store that sells Google-branded merchandise. Costs This tutorial uses billable components of Google Cloud Platform:* BigQuery* BigQuery ML. Learn about [BigQuery pricing](https://cloud.google.com/bigquery/pricing), [BigQuery ML pricing](https://cloud.google.com/bigquery-ml/pricing) and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. PIP install packages and dependencies
###Code
!pip install google-cloud-bigquery
!pip install google-cloud-bigquery-storage
!pip install pandas-gbq
# Reservation package needed to setup flex slots for flat-rate pricing
!pip install google-cloud-bigquery-reservation
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set up your Google Cloud Platform project_The following steps are required, regardless of your notebook environment._1. [Select or create a project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)1. [Enable the AI Platform APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)1. Enter your project ID and region in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook._Note_: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set project ID and authenticate Update your Project ID below. The rest of the notebook will run using these credentials.
###Code
PROJECT_ID = "UPDATE TO YOUR PROJECT ID"
REGION = 'US'
DATA_SET_ID = 'bqml_kmeans' # Ensure you first create a data set in BigQuery
!gcloud config set project $PROJECT_ID
# If you have not built the Data Set, the following command will build it for you
# !bq mk --location=$REGION --dataset $PROJECT_ID:$DATA_SET_ID
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
from google.cloud import bigquery
import numpy as np
import pandas as pd
import pandas_gbq
import matplotlib.pyplot as plt
pd.set_option('display.float_format', lambda x: '%.3f' % x) # used to display float format
client = bigquery.Client(project=PROJECT_ID)
###Output
_____no_output_____
###Markdown
Data exploration and preparation Prior to building your models, you are typically expected to invest a significant amount of time cleaning, exploring, and aggregating your dataset in a meaningful way for modeling. For the purposes of this demo we skip that step, in order to prioritize showcasing clustering with k-means in BigQuery ML. Building synthetic data Our goal is to use both online (GA360) and offline (CRM) data. You can use your own CRM data; however, since we don't have CRM data to showcase here, we will instead generate synthetic data. We will generate an estimated household income and a gender for each user. To do so, we will hash fullVisitorID and build simple rules based on the last digit of the hash. When you run this process with your own data, you can join CRM data with several dimensions, but this is just an example of what is possible.
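To make the hash-and-modulo idea concrete before the SQL below, here is a minimal, illustrative Python sketch of the same trick (an addition for clarity, not from the original notebook): it uses `hashlib` as a stand-in for BigQuery's `FARM_FINGERPRINT`, and the visitor IDs are made up.

```python
import hashlib

def hashed_id(full_visitor_id: str) -> int:
    # Stand-in for ABS(FARM_FINGERPRINT(fullVisitorID)); any stable hash works for illustration.
    return int(hashlib.sha256(full_visitor_id.encode()).hexdigest(), 16)

def synthetic_crm(full_visitor_id: str) -> dict:
    h = hashed_id(full_visitor_id)
    gender = "M" if h % 2 == 0 else "F"   # MOD(hash, 2) rule
    last_digit = h % 10                   # MOD(hash, 10) rule for income bands
    if last_digit == 0:
        hhi = 55000
    elif last_digit < 3:
        hhi = 65000
    elif last_digit < 7:
        hhi = 75000
    elif last_digit < 9:
        hhi = 85000
    else:
        hhi = 95000
    return {"fullVisitorID": full_visitor_id, "gender": gender, "hhi": hhi}

# Hypothetical IDs, purely for illustration:
print(synthetic_crm("1234567890"), synthetic_crm("9876543210"))
```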
###Code
# We start with GA360 data, and will eventually build synthetic CRM as an example.
# This block is the first step, just working with GA360
ga360_only_view = 'GA360_View'
shared_dataset_ref = client.dataset(DATA_SET_ID)
ga360_view_ref = shared_dataset_ref.table(ga360_only_view)
ga360_view = bigquery.Table(ga360_view_ref)
ga360_query = '''
SELECT
fullVisitorID,
ABS(farm_fingerprint(fullVisitorID)) AS Hashed_fullVisitorID, # This will be used to generate random data.
MAX(device.operatingSystem) AS OS, # We can aggregate this because an OS is tied to a fullVisitorID.
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Apparel' THEN 1 ELSE 0 END) AS Apparel,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Office' THEN 1 ELSE 0 END) AS Office,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Electronics' THEN 1 ELSE 0 END) AS Electronics,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Limited Supply' THEN 1 ELSE 0 END) AS LimitedSupply,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Accessories' THEN 1 ELSE 0 END) AS Accessories,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Shop by Brand' THEN 1 ELSE 0 END) AS ShopByBrand,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Bags' THEN 1 ELSE 0 END) AS Bags,
ROUND (SUM (productPrice/1000000),2) AS productPrice_USD
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
WHERE
_TABLE_SUFFIX BETWEEN '20160801'
AND '20160831'
AND geoNetwork.country = 'United States'
AND type = 'EVENT'
GROUP BY
1,
2
'''
ga360_view.view_query = ga360_query.format(PROJECT_ID)
ga360_view = client.create_table(ga360_view) # API request
print(f"Successfully created view at {ga360_view.full_table_id}")
# Show a sample of GA360 data
ga360_query_df = f'''
SELECT * FROM {ga360_view.full_table_id.replace(":", ".")} LIMIT 5
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(ga360_query_df, job_config=job_config) #API Request
df_ga360 = query_job.result()
df_ga360 = df_ga360.to_dataframe()
df_ga360
# Create synthetic CRM data in SQL
CRM_only_view = 'CRM_View'
shared_dataset_ref = client.dataset(DATA_SET_ID)
CRM_view_ref = shared_dataset_ref.table(CRM_only_view)
CRM_view = bigquery.Table(CRM_view_ref)
# Query below works by hashing the fullVisitorID, which creates a random distribution.
# We use modulo to artificially split gender and hhi distribution.
CRM_query = '''
SELECT
fullVisitorID,
IF
(MOD(Hashed_fullVisitorID,2) = 0,
"M",
"F") AS gender,
CASE
WHEN MOD(Hashed_fullVisitorID,10) = 0 THEN 55000
WHEN MOD(Hashed_fullVisitorID,10) < 3 THEN 65000
WHEN MOD(Hashed_fullVisitorID,10) < 7 THEN 75000
WHEN MOD(Hashed_fullVisitorID,10) < 9 THEN 85000
WHEN MOD(Hashed_fullVisitorID,10) = 9 THEN 95000
ELSE
Hashed_fullVisitorID
END
AS hhi
FROM (
SELECT
fullVisitorID,
ABS(farm_fingerprint(fullVisitorID)) AS Hashed_fullVisitorID,
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
WHERE
_TABLE_SUFFIX BETWEEN '20160801'
AND '20160831'
AND geoNetwork.country = 'United States'
AND type = 'EVENT'
GROUP BY
1,
2)
'''
CRM_view.view_query = CRM_query.format(PROJECT_ID)
CRM_view = client.create_table(CRM_view) # API request
print(f"Successfully created view at {CRM_view.full_table_id}")
# See an output of the synthetic CRM data
CRM_query_df = f'''
SELECT * FROM {CRM_view.full_table_id.replace(":", ".")} LIMIT 5
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(CRM_query_df, job_config=job_config) #API Request
df_CRM = query_job.result()
df_CRM = df_CRM.to_dataframe()
df_CRM
###Output
_____no_output_____
###Markdown
Build a final view to use as training data for clustering You may decide to change the view below based on your specific dataset. This is fine, and is exactly why we're creating a view. All subsequent steps will reference this view. If you change the SQL below, you won't need to modify other parts of the notebook.
###Code
# Build a final view, which joins GA360 data with CRM data
final_data_view = 'Final_View'
shared_dataset_ref = client.dataset(DATA_SET_ID)
final_view_ref = shared_dataset_ref.table(final_data_view)
final_view = bigquery.Table(final_view_ref)
final_data_query = f'''
SELECT
g.*,
c.* EXCEPT(fullVisitorId)
FROM {ga360_view.full_table_id.replace(":", ".")} g
JOIN {CRM_view.full_table_id.replace(":", ".")} c
ON g.fullVisitorId = c.fullVisitorId
'''
final_view.view_query = final_data_query.format(PROJECT_ID)
final_view = client.create_table(final_view) # API request
print(f"Successfully created view at {final_view.full_table_id}")
# Show final data used prior to modeling
sql_demo = f'''
SELECT * FROM {final_view.full_table_id.replace(":", ".")} LIMIT 5
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_demo, job_config=job_config) #API Request
df_demo = query_job.result()
df_demo = df_demo.to_dataframe()
df_demo
###Output
_____no_output_____
###Markdown
Create our initial model In this section, we will build our initial k-means model. We won't focus on the optimal k or other hyperparameters just yet. Some additional points: 1. We remove fullVisitorID as an input, even though the data is grouped at that level, because we don't need it as a feature for clustering. fullVisitorID should never be used as a feature. 2. We have both categorical and numerical features. 3. We do not have to normalize any numerical features, as BigQuery ML does this for us automatically. Build a function to build our model We will build a simple Python function to build our model, rather than doing everything in SQL. This approach means we can asynchronously start several models and let BigQuery run the training jobs in parallel.
###Code
def makeModel (n_Clusters, Model_Name):
sql =f'''
CREATE OR REPLACE MODEL `{PROJECT_ID}.{DATA_SET_ID}.{Model_Name}`
OPTIONS(model_type='kmeans',
kmeans_init_method = 'KMEANS++',
num_clusters={n_Clusters}) AS
SELECT * except(fullVisitorID, Hashed_fullVisitorID) FROM `{final_view.full_table_id.replace(":", ".")}`
'''
job_config = bigquery.QueryJobConfig()
client.query(sql, job_config=job_config) # Make an API request.
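# Optional, hedged sketch (added for illustration; not part of the original flow): client.query()
# only submits the CREATE MODEL job, which is why makeModel() returns immediately. If you ever
# need to block until a job finishes, keep the QueryJob handle and call .result() on it.
def run_and_wait(sql):
    """Illustrative helper (hypothetical name): submit a query and wait for it to complete."""
    job = client.query(sql)  # submits the job asynchronously
    job.result()             # blocks until BigQuery finishes (e.g., model training)
    return job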
# Let's start with a simple test to ensure everything works.
# After running makeModel(), allow a few minutes for training to complete.
model_test_name = "test"
makeModel(3, model_test_name)
# After training is completed, you can either check in the UI, or you can interact with it using list_models().
for model in client.list_models(DATA_SET_ID):
print(model)
###Output
_____no_output_____
###Markdown
Work towards creating a better model In this section, we want to determine the proper k value. Determining the right value of k depends completely on the use case. There are straightforward examples that will simply tell you how many clusters are needed. Suppose you are pre-processing handwritten digits - this tells us k should be 10. Or perhaps your business stakeholder only wants to deliver three different marketing campaigns and needs you to identify three clusters of customers; then setting k=3 might be meaningful. However, the use case is sometimes more open ended and you may want to explore different numbers of clusters to see how your datapoints group together with minimal error within each cluster. To accomplish this, we start by performing the 'Elbow Method', which simply charts loss vs k. Then we'll also use the Davies-Bouldin score (https://en.wikipedia.org/wiki/Davies%E2%80%93Bouldin_index). Below we are going to create several models to perform both the Elbow Method and get the Davies-Bouldin score. You may change parameters like low_k and high_k. Our process will create models between these two values. There is an additional parameter called model_prefix_name. We recommend you leave this at its current value. It is used to generate a naming convention for our models.
###Code
# Define upper and lower bound for k, then build individual models for each.
# After running this loop, look at the UI to see several model objects that exist.
low_k = 3
high_k = 15
model_prefix_name = 'kmeans_clusters_'
lst = list(range (low_k, high_k+1)) #build list to iterate through k values
for k in lst:
model_name = model_prefix_name + str(k)
makeModel(k, model_name)
print(f"Model started: {model_name}")
###Output
Model started: kmeans_clusters_3
Model started: kmeans_clusters_4
Model started: kmeans_clusters_5
Model started: kmeans_clusters_6
Model started: kmeans_clusters_7
Model started: kmeans_clusters_8
Model started: kmeans_clusters_9
Model started: kmeans_clusters_10
Model started: kmeans_clusters_11
Model started: kmeans_clusters_12
Model started: kmeans_clusters_13
Model started: kmeans_clusters_14
Model started: kmeans_clusters_15
###Markdown
Select optimal k
###Code
# list all current models
models = client.list_models(DATA_SET_ID) # Make an API request.
print("Listing current models:")
for model in models:
full_model_id = f"{model.dataset_id}.{model.model_id}"
print(full_model_id)
# Remove our sample model from BigQuery, so we only have remaining models from our previous loop
model_id = DATA_SET_ID+"."+model_test_name
client.delete_model(model_id) # Make an API request.
print(f"Deleted model '{model_id}'")
# This will create a dataframe with each model name, the Davies Bouldin Index, and Loss.
# It will be used for the elbow method and to help determine optimal K
df = pd.DataFrame(columns=['davies_bouldin_index', 'mean_squared_distance'])
models = client.list_models(DATA_SET_ID) # Make an API request.
for model in models:
full_model_id = f"{model.dataset_id}.{model.model_id}"
sql =f'''
SELECT
davies_bouldin_index,
mean_squared_distance
FROM ML.EVALUATE(MODEL `{full_model_id}`)
'''
job_config = bigquery.QueryJobConfig()
# Start the query, passing in the extra configuration.
query_job = client.query(sql, job_config=job_config) # Make an API request.
df_temp = query_job.to_dataframe() # Wait for the job to complete.
df_temp['model_name'] = model.model_id
df = pd.concat([df, df_temp], axis=0)
###Output
_____no_output_____
###Markdown
The code below assumes we've used the naming convention originally created in this notebook, and the k value occurs after the 2nd underscore. If you've changed the model_prefix_name variable, then this code might break.
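If you do change the prefix, a slightly more defensive variant (a sketch, assuming the k value is always the trailing number in the model name) is to pull the digits with a regular expression instead of relying on underscore positions:

```python
# Extract the trailing integer from each model name, regardless of how many underscores the prefix has.
df['n_clusters'] = df['model_name'].str.extract(r'(\d+)$', expand=False).astype(int)
```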
###Code
# This will modify the dataframe above, produce a new field with 'n_clusters', and will sort for graphing
df['n_clusters'] = df['model_name'].str.split('_').map(lambda x: x[2])
df['n_clusters'] = df['n_clusters'].apply(pd.to_numeric)
df = df.sort_values(by='n_clusters', ascending=True)
df
df.plot.line(x='n_clusters', y=['davies_bouldin_index', 'mean_squared_distance'])
###Output
_____no_output_____
###Markdown
Note - when you run this notebook, you will get different results, due to random cluster initialization. If you'd like to consistently return the same clusters for each run, you may explicitly select your initialization through hyperparameter selection (https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-createkmeans_init_method). Making our k selection: There is no perfect approach or process when determining the optimal k value. It can often be determined by business rules or requirements. In this example, there isn't a simple requirement, so these considerations can be followed instead:1. We start with the 'elbow method', which is effectively charting loss vs k. Sometimes, though not always, there's a natural 'elbow' where incremental clusters do not drastically reduce loss. In this specific example, and as you often might find, unfortunately there isn't a natural 'elbow', so we must continue our process. 2. Next, we chart Davies-Bouldin vs k. This score tells us how 'different' each cluster is, with the optimal score at zero. With 5 clusters, we see a score of ~1.4, and only with k>9 do we see better values. 3. Finally, we begin to interpret the differences between the models. You can review the evaluation module for various models to understand the distributions of our features. With our data, we can look for patterns by gender, household income, and shopping habits. Analyze our final cluster There are 2 options to understand the characteristics of your model. You can either 1) look in the BigQuery UI, or you can 2) programmatically interact with your model object. Below you'll find a simple example of the latter option.
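Before hard-coding the choice of model in the next cell, one way to shortlist candidate k values programmatically from the evaluation dataframe built above is sketched here (a minimal illustration; the 1.5 Davies-Bouldin threshold is an arbitrary example, not a rule):

```python
# Shortlist k values with a reasonably low Davies-Bouldin score, then prefer the smallest k
# among them so the segments stay easy to explain to stakeholders.
candidates = df[df['davies_bouldin_index'] <= 1.5].sort_values('n_clusters')
print(candidates[['n_clusters', 'davies_bouldin_index', 'mean_squared_distance']])

if len(candidates) > 0:
    chosen_k = int(candidates['n_clusters'].iloc[0])   # smallest k with an acceptable score
else:
    # Fall back to the k with the best (lowest) Davies-Bouldin score.
    chosen_k = int(df.loc[df['davies_bouldin_index'].idxmin(), 'n_clusters'])
print(f"Candidate k: {chosen_k}")
```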
###Code
model_to_use = 'kmeans_clusters_5' # User can edit this
final_model = DATA_SET_ID+'.'+model_to_use
sql_get_attributes = f'''
SELECT
centroid_id,
feature,
categorical_value
FROM
ML.CENTROIDS(MODEL {final_model})
WHERE
feature IN ('OS','gender')
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_get_attributes, job_config=job_config) #API Request
df_attributes = query_job.result()
df_attributes = df_attributes.to_dataframe()
df_attributes.head()
# get numerical information about clusters
sql_get_numerical_attributes = f'''
WITH T AS (
SELECT
centroid_id,
ARRAY_AGG(STRUCT(feature AS name,
ROUND(numerical_value,1) AS value)
ORDER BY centroid_id)
AS cluster
FROM ML.CENTROIDS(MODEL {final_model})
GROUP BY centroid_id
),
Users AS(
SELECT
centroid_id,
COUNT(*) AS Total_Users
FROM(
SELECT
* EXCEPT(nearest_centroids_distance)
FROM
ML.PREDICT(MODEL {final_model},
(
SELECT
*
FROM
{final_view.full_table_id.replace(":", ".")}
)))
GROUP BY centroid_id
)
SELECT
centroid_id,
Total_Users,
(SELECT value from unnest(cluster) WHERE name = 'Apparel') AS Apparel,
(SELECT value from unnest(cluster) WHERE name = 'Office') AS Office,
(SELECT value from unnest(cluster) WHERE name = 'Electronics') AS Electronics,
(SELECT value from unnest(cluster) WHERE name = 'LimitedSupply') AS LimitedSupply,
(SELECT value from unnest(cluster) WHERE name = 'Accessories') AS Accessories,
(SELECT value from unnest(cluster) WHERE name = 'ShopByBrand') AS ShopByBrand,
(SELECT value from unnest(cluster) WHERE name = 'Bags') AS Bags,
(SELECT value from unnest(cluster) WHERE name = 'productPrice_USD') AS productPrice_USD,
(SELECT value from unnest(cluster) WHERE name = 'hhi') AS hhi
FROM T LEFT JOIN Users USING(centroid_id)
ORDER BY centroid_id ASC
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_get_numerical_attributes, job_config=job_config) #API Request
df_numerical_attributes = query_job.result()
df_numerical_attributes = df_numerical_attributes.to_dataframe()
df_numerical_attributes.head()
###Output
_____no_output_____
###Markdown
In addition to the output above, I'll note a few insights we get from our clusters. Cluster 1 - The apparel shopper, who also purchases more often than normal. This segment (although synthetic data) skews female. Cluster 2 - Most likely to shop by brand, and interested in bags. This segment has fewer purchases on average than the first cluster; however, this is the highest-value customer. Cluster 3 - The most populated cluster, this one has a small number of purchases and spends less on average. This segment is the one-time buyer, rather than the brand loyalist. Cluster 4 - Most interested in accessories, does not buy as often as clusters 1 and 2, but buys more than cluster 3. Cluster 5 - This is an outlier, as only 1 person belongs to this group. Use the model to group new website behavior, and then push results to GA360 for marketing activation After we have a finalized model, we want to use it for inference. The code below outlines how to score or assign users into clusters; the assignments are labeled as CENTROID_ID. Although this by itself is helpful, we also recommend a process to ingest these scores back into GA360. The easiest way to export your BigQuery ML predictions from a BigQuery table to Google Analytics 360 is to use the MoDeM (Model Deployment for Marketing, https://github.com/google/modem) reference implementation. MoDeM helps you load data into Google Analytics for eventual activation in Google Ads, Display & Video 360 and Search Ads 360.
###Code
sql_score = f'''
SELECT * EXCEPT(nearest_centroids_distance)
FROM
ML.PREDICT(MODEL {final_model},
(
SELECT
*
FROM
{final_view.full_table_id.replace(":", ".")}
LIMIT 1))
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_score, job_config=job_config) #API Request
df_score = query_job.result()
df_score = df_score.to_dataframe()
df_score
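# Hedged sketch (added for illustration; not part of the original notebook): to batch-score every
# user and persist the assignments for a downstream export (e.g., via MoDeM), the same ML.PREDICT
# query can be written to a destination table. The table name 'Scored_Customers' is hypothetical.
score_table_ref = client.dataset(DATA_SET_ID).table('Scored_Customers')
batch_config = bigquery.QueryJobConfig(
    destination=score_table_ref,
    write_disposition='WRITE_TRUNCATE',  # overwrite the scores on each run
)
sql_score_all = f'''
SELECT fullVisitorID, centroid_id AS CENTROID_ID
FROM ML.PREDICT(MODEL {final_model},
  (SELECT * FROM {final_view.full_table_id.replace(":", ".")}))
'''
client.query(sql_score_all, job_config=batch_config).result()  # wait for the scoring job to finish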
###Output
_____no_output_____
###Markdown
Clean up: Delete all models and tables
###Code
# Are you sure you want to do this? This is to delete all models
models = client.list_models(DATA_SET_ID) # Make an API request.
for model in models:
full_model_id = f"{model.dataset_id}.{model.model_id}"
client.delete_model(full_model_id) # Make an API request.
print(f"Deleted: {full_model_id}")
# Are you sure you want to do this? This is to delete all tables and views
tables = client.list_tables(DATA_SET_ID) # Make an API request.
for table in tables:
full_table_id = f"{table.dataset_id}.{table.table_id}"
client.delete_table(full_table_id) # Make an API request.
print(f"Deleted: {full_table_id}")
###Output
Deleted: bqml_kmeans.CRM_View
Deleted: bqml_kmeans.Final_View
Deleted: bqml_kmeans.GA360_View
###Markdown
|
experiment_seer.ipynb | ###Markdown
Run SurvTRACE on SEER dataset
###Code
'''SEER data comes from https://seer.cancer.gov/data/
'''
from survtrace.dataset import load_data
from survtrace.evaluate_utils import Evaluator
from survtrace.utils import set_random_seed
from survtrace.model import SurvTraceMulti
from survtrace.train_utils import Trainer
from survtrace.config import STConfig
import matplotlib.pyplot as plt
# define the setup parameters
STConfig['data'] = 'seer'
STConfig['num_hidden_layers'] = 2
STConfig['hidden_size'] = 16
STConfig['intermediate_size'] = 64
STConfig['num_attention_heads'] = 2
STConfig['initializer_range'] = .02
STConfig['early_stop_patience'] = 5
set_random_seed(STConfig['seed'])
hparams = {
'batch_size': 1024,
'weight_decay': 0,
'learning_rate': 1e-4,
'epochs': 100,
}
# load data
df, df_train, df_y_train, df_test, df_y_test, df_val, df_y_val = load_data(STConfig)
# get model
model = SurvTraceMulti(STConfig)
###Output
_____no_output_____
###Markdown
kick off the training
###Code
# initialize a trainer & start training
trainer = Trainer(model)
train_loss_list, val_loss_list = trainer.fit((df_train, df_y_train), (df_val, df_y_val),
batch_size=hparams['batch_size'],
epochs=hparams['epochs'],
learning_rate=hparams['learning_rate'],
weight_decay=hparams['weight_decay'],
val_batch_size=10000,)
# evaluate model
evaluator = Evaluator(df, df_train.index)
evaluator.eval(model, (df_test, df_y_test))
print("done")
plt.plot(train_loss_list, label='train')
plt.plot(val_loss_list, label='val')
plt.legend(fontsize=20)
plt.xlabel('epoch',fontsize=20)
plt.ylabel('loss', fontsize=20)
plt.show()
###Output
_____no_output_____ |
notebooks/v2.0_EDA_churn.ipynb | ###Markdown
PA003: Churn Predict 0.0 Import
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import inflection
import math
from IPython.core.display import HTML
from scipy.stats import shapiro, chi2_contingency
from sklearn import preprocessing as pp
# from sklearn.preprocessing import StandardScaler, MinMaxScaler , RobustScaler
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
0.1.Helper function
###Code
def my_settings():
%matplotlib inline
    # matplotlib settings
plt.style.use( 'ggplot' )
plt.rcParams['figure.figsize'] = [25, 12]
plt.rcParams['font.size'] = 8
# notebook settings
display(HTML('<style>.container{width:100% !important;}</style>'))
np.set_printoptions(suppress=True)
pd.set_option('display.float_format', '{:.3f}'.format)
# seaborn settings
sns.set(rc={'figure.figsize':(25,12)})
sns.set_theme(style = 'darkgrid', font_scale = 1)
my_settings()
def numerical_descriptive_statistical(num_attributes):
"""
Shows the main values for descriptive statistics in numerical variables.
Args:
data ([float64 and int64]): Insert all numerical attributes in the dataset
Returns:
[dataframe]: A dataframe with mean, median, std deviation, skewness, kurtosis, min, max and range
"""
# Central Tendency - Mean, Median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
# Dispersion - std, min, max, range, skew, kurtosis, Shapiro-Wilk Test
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
d7 = pd.DataFrame(num_attributes.apply(lambda x: 'not normal' if shapiro(x.sample(5000))[1] < 0.05 else 'normal')).T
# concatenate
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6, d7]).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis', 'shapiro']
return m
def categorical_descriptive_statstical(data , col):
"""
Shows the the absolute and percent values in categorical variables.
Args:
data ([object]): Insert all categorical attributes in the dataset
Returns:
[dataframe]: A dataframe with absolute and percent values
"""
return pd.DataFrame({'absolute' : data[col].value_counts() , 'percent %': data[col].value_counts(normalize = True) * 100})
def correlation_matrix(data , method):
"""Generates a correlation matrix of numerical variables
    Args:
data ([DataFrame]): [The dataframe of the EDA]
method ([string]): [The method used, it can be ‘pearson’, ‘kendall’ or ‘spearman’]
Returns:
[Image]: [The correlation matrix plot made with seaborn]
"""
# correlation
num_attributes = data.select_dtypes( include = ['int64' , 'float64'])
correlation = num_attributes.corr( method = method)
# correlation.append('exited')
# df_corr = data[correlation].reset_index(drop=True)
# df_corr['exited'] = df_corr['exited'].astype('int')
# mask
mask = np.zeros_like(correlation)
    mask = np.triu(np.ones_like(correlation, dtype=bool))  # np.bool is removed in recent NumPy versions
# plot - mask = mask ,
ax = sns.heatmap(correlation , fmt = '.2f' , vmin = -1 , vmax = 1, annot = True, cmap = 'YlGnBu' , square = True)
return ax
def without_hue(plot, feature):
total = len(feature)
for p in plot.patches:
percentage = '{:.1f}%'.format(100 * p.get_height()/total)
x = p.get_x() + p.get_width() / 2 - 0.05
y = p.get_y() + p.get_height()
plot.annotate(percentage, (x, y), size = 12)
def plot_cat_overview(df, cat_attributes, target):
cat_attributes.remove(target)
plots_lin = math.ceil(len(cat_attributes)/2)
fig, axs = plt.subplots(plots_lin,2, figsize=(25, 10), facecolor='w', edgecolor='k')
fig.subplots_adjust(hspace = .5, wspace=.20)
axs = axs.ravel()
for c in range(len(cat_attributes)):
ax1 = sns.countplot(ax=axs[c], x=cat_attributes[c],hue=target, data=df)
        without_hue(ax1, df[target])  # use the passed dataframe and target instead of the global df1
def sum_of_na (data):
return pd.DataFrame({'Sum of NA' : data.isna().sum(), '% NA': data.isna().sum()/data.shape[0]})
def lift_score(y, y_pred, **kwargs):
df = pd.DataFrame()
df['true'] = y
df['pred'] = y_pred
df.sort_values('pred', ascending=False, inplace=True)
N = len(df)
churn_total = df['true'].sum() / N
n = int(np.ceil(.1 * N))
data_here = df.iloc[:n, :]
churn_here = data_here['true'].sum() / n
lift = churn_here / churn_total
return lift
def knapsack(W, wt, val):
n = len(val)
K = [[0 for x in range(W + 1)] for x in range(n + 1)]
for i in range(n + 1):
for w in range(W + 1):
if i == 0 or w == 0:
K[i][w] = 0
elif wt[i-1] <= w:
K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w])
else:
K[i][w] = K[i-1][w]
max_val = K[n][W]
keep = [False] * n
res = max_val
w = W
for i in range(n, 0, -1):
if res <= 0: break
if res == K[i - 1][w]: continue
else:
keep[i - 1] = True
res = res - val[i - 1]
w = w - wt[i - 1]
del K
return max_val, keep
###Output
_____no_output_____
###Markdown
0.2. Loading Data
###Code
df_raw = pd.read_csv(r'~/repositorio/churn_predict/data/raw/churn.csv')
df_raw.head()
###Output
_____no_output_____
###Markdown
1.0. Data Description - **RowNumber** : The row number. - **CustomerID** : Unique identifier for the customer. - **Surname** : The customer's surname. - **CreditScore** : The customer's consumer credit score. - **Geography** : The country where the customer resides. - **Gender** : The customer's gender. - **Age** : The customer's age. - **Tenure** : Number of years the customer has been active. - **Balance** : The monetary amount the customer holds in their bank account. - **NumOfProducts** : The number of products the customer has purchased from the bank. - **HasCrCard** : Indicates whether or not the customer has a credit card. - **IsActiveMember** : Indicates whether the customer made at least one bank account transaction within the last 12 months. - **EstimatedSalary** : Estimate of the customer's monthly salary. - **Exited** : Indicates whether or not the customer has churned.
###Code
df1 = df_raw.copy()
df1.columns
df1.duplicated('CustomerId').sum()
df1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RowNumber 10000 non-null int64
1 CustomerId 10000 non-null int64
2 Surname 10000 non-null object
3 CreditScore 10000 non-null int64
4 Geography 10000 non-null object
5 Gender 10000 non-null object
6 Age 10000 non-null int64
7 Tenure 10000 non-null int64
8 Balance 10000 non-null float64
9 NumOfProducts 10000 non-null int64
10 HasCrCard 10000 non-null int64
11 IsActiveMember 10000 non-null int64
12 EstimatedSalary 10000 non-null float64
13 Exited 10000 non-null int64
dtypes: float64(2), int64(9), object(3)
memory usage: 1.1+ MB
###Markdown
1.1 Rename Columns
###Code
old_columns=list(df1.columns)
snakecase = lambda x : inflection.underscore(x)
new_columns = map(snakecase , old_columns)
# rename columns
df1.columns = new_columns
###Output
_____no_output_____
###Markdown
1.2. Data Dimensions
###Code
print('Numbers of rows: {}'.format(df1.shape[0]))
print('Numbers of cols: {}'.format(df1.shape[1]))
###Output
Numbers of rows: 10000
Numbers of cols: 14
###Markdown
1.3. Data Types
###Code
df1.dtypes
###Output
_____no_output_____
###Markdown
1.3.1. Change Data Types
###Code
df1.exited = df1.exited.astype('bool')
df1.has_cr_card = df1.has_cr_card.astype('bool')
df1.is_active_member= df1.is_active_member.astype('bool')
###Output
_____no_output_____
###Markdown
1.3.2. Check unique values
###Code
df1.nunique()
###Output
_____no_output_____
###Markdown
1.3.3. Remove Variables
###Code
cols_drop = ['row_number', 'surname', 'customer_id']
df1 = df1.drop(cols_drop , axis = 1)
###Output
_____no_output_____
###Markdown
1.4. Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.5. Data Descriptive
###Code
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64'])
###Output
_____no_output_____
###Markdown
1.5.1. Numerical Attributes
###Code
m = numerical_descriptive_statistical(num_attributes)
m
###Output
_____no_output_____
###Markdown
1.5.2. Categorical Attributes
###Code
cat_attributes.columns
x = df1[['geography' , 'exited']].groupby('geography').count().reset_index()
x
plot_cat_overview(cat_attributes, list(cat_attributes.columns), 'exited')
categorical_descriptive_statstical(cat_attributes , 'geography')
categorical_descriptive_statstical(cat_attributes , 'gender')
categorical_descriptive_statstical(cat_attributes , 'has_cr_card')
categorical_descriptive_statstical(cat_attributes , 'is_active_member')
categorical_descriptive_statstical(cat_attributes , 'exited')
###Output
_____no_output_____
###Markdown
1.5.3. Multivariate Analysis
###Code
correlation_matrix(df1 , 'spearman')
###Output
_____no_output_____
###Markdown
1.5.4. Outliers Numerical Attributes
###Code
num_cols = num_attributes.columns.tolist()
i = 1
for col in df1[num_cols]:
plt.subplot(2,3,i)
ax = sns.boxplot( data = df1 , x = col)
i += 1
###Output
_____no_output_____
###Markdown
**Important information:** - There are outliers in **credit_score, num_of_products and age** - The **churn ratio is 20.37%** - **70.6%** of the members **have a credit card** - More than **50% of the clients** are **from France** 2.0. Feature Engineering
###Code
df2 = df1.copy()
df2.head()
###Output
_____no_output_____
###Markdown
2.1. Balance_age
###Code
# balance_per_age
balance_age = df2[['balance', 'age']].groupby('age').mean().reset_index()
balance_age.columns = ['age' , 'balance_age']
# merge
df2 = pd.merge(df2, balance_age, on = 'age' , how = 'left')
###Output
_____no_output_____
###Markdown
2.2. Balance_country
###Code
balance_country = df2.loc[:, ['geography', 'balance']].groupby('geography').mean().reset_index()
balance_country.columns = ['geography', 'balance_per_country']
# merge
df2 = pd.merge(df2, balance_country, on = 'geography', how = 'left')
###Output
_____no_output_____
###Markdown
2.3. Balance_tenure
###Code
balance_tenure = df2.loc[:, ['tenure', 'balance']].groupby('tenure').mean().reset_index()
balance_tenure.columns = ['tenure', 'LTV']
# merge
df2 = pd.merge(df2, balance_tenure, on = 'tenure', how = 'left')
###Output
_____no_output_____
###Markdown
2.4. Salary_gender
###Code
estimated_salary_gender = df2.loc[:, ['gender', 'estimated_salary']].groupby('gender').mean().reset_index()
estimated_salary_gender.columns = ['gender', 'estimated_salary_per_gender']
# merge
df2 = pd.merge(df2, estimated_salary_gender, on = 'gender', how = 'left')
correlation_matrix(df2, 'pearson')
###Output
_____no_output_____
###Markdown
3.0. Data Filtering
###Code
df3 = df2.copy()
###Output
_____no_output_____
###Markdown
4.0. Exploratory Data Analysis (EDA)
###Code
df4 = df3.copy()
###Output
_____no_output_____
###Markdown
5.0. Data Preparation
###Code
df5 = df4.copy()
df5.columns
df5.head()
mms = pp.MinMaxScaler()
rbs = pp.RobustScaler()
#Balance
df5['balance'] = rbs.fit_transform(df5[['balance']].values)
#EstimatedSalary
df5['estimated_salary'] = rbs.fit_transform(df5[['estimated_salary']].values)
#LTV
df5['LTV'] = rbs.fit_transform(df5[['LTV']].values)
#gender - label encoding
gender_dict = { 'Male':0 , 'Female':1 }
df5['gender'] = df5['gender'].map( gender_dict )
#Geography - One Hot Encoding
# one hot encoding encoding
df5 = pd.get_dummies(df5, prefix=['country'], columns=['geography'])
df5 = pd.get_dummies(df5, prefix=['gender'], columns=['gender'])
# has_cr_card, is_active_member and exited were cast to bool above, so map boolean keys back to 0/1
questions_encoding = {True: 1, False: 0}
df5['is_active_member'] = df5['is_active_member'].map(questions_encoding)
df5['has_cr_card'] = df5['has_cr_card'].map(questions_encoding)
df5['exited'] = df5['exited'].map(questions_encoding)
###Output
_____no_output_____
###Markdown
6.0. Feature Selection
###Code
df6 = df5.copy()
x = df6.drop(['exited'], axis =1)
y = df6.exited
y
###Output
_____no_output_____
###Markdown
7.0. Machine Learning Modelling
###Code
df7 = df6.copy()
###Output
_____no_output_____
###Markdown
8.0. Performance Metrics
###Code
df8 = df7.copy()
###Output
_____no_output_____
###Markdown
9.0. Deploy to Production
###Code
df9 = df8.copy()
###Output
_____no_output_____ |
jupyter_tutorials/tutorial_2D_pristine.ipynb | ###Markdown
Calculating structure and properties of pristine 2D materialsa tutorial by Anne Marie Tan Some things to note before we get started:* Download the python scripts from the [github repository](https://github.com/aztan2/charged-defects-framework) and place them in your home directory on hipergator. Export this location to your PYTHONPATH in `~/.bashrc`.* You will need to launch this notebook from a virtual environment on hipergator in which you have installed python packages like numpy, matplotlib, pymatgen, pandas, nglview (if you want to use the built-in crystal viewer), and of course jupyterlab.* Follow the instructions in the [document on the Hennig group google drive](https://drive.google.com/file/d/15qzXZkK6Wrmor-9JOuGI_-nMmcHZsAoe/view?usp=sharing) to start a Jupyter notebook within a SLURM job on hipergator and connect to it from the web browser running on your local computer.* For the purpose of this tutorial, I will try to keep everything self-contained by executing all commands within the notebook, including navigating directories, executing python funtions and scripts, etc. However, when you actually apply this to a new system, you will probably find it easier to do some of these directly from the command line. Before getting into the defects, let’s start by computing some properties of the pristine monolayer, namely lattice constants, band gaps, and dielectric tensor.
###Code
from qdef2d import slabutils
from qdef2d.io.vasp import incar, kpoints, submit
import importlib
#importlib.reload(slabutils)
importlib.reload(incar)
#importlib.reload(kpoints)
#importlib.reload(submit)
###Output
_____no_output_____
###Markdown
1. Check convergence of energy, lattice constants w.r.t. vacuum spacing: * Obtain structure POSCAR from https://materialsproject.org. \For this exercise, we'll use the example of MoS$_2$. The layered bulk structure can be found at https://materialsproject.org/materials/mp-2815/. \Download the POSCAR file, rename it as POSCAR_bulk and place it in an appropriately-named directory on hipergator, such as `/blue/hennig/yourusername/MoS2/unitcell`. \Enter this directory, replacing the path below with the path to your directory.
###Code
## Jupyter notebook has some built-in "magic commands" to execute certain bash commands such as cd or ls
%cd /blue/hennig/annemarietan/test/MoS2/unitcell
%ls
###Output
[0m[01;34mGGA[0m/ POSCAR_bulk POSCAR_vac_10 POSCAR_vac_15 POSCAR_vac_20
###Markdown
I embedded a simple crystal structure viewer in this notebook, but you can also open this POSCAR in your favourite software and have a look at it. It should be a 6-atom unitcell of the layered bulk MoS$_2$.
###Code
from ase.visualize import view
from pymatgen.io.ase import AseAtomsAdaptor
from pymatgen.io.vasp import Poscar
structure = Poscar.from_file('POSCAR_bulk').structure
atoms = AseAtomsAdaptor.get_atoms(structure)
ngl_handler = view(atoms, viewer='ngl')
ngl_handler.view.add_representation('ball+stick', selection='all')
ngl_handler.view.center()
ngl_handler
###Output
_____no_output_____
###Markdown
Create a subdirectory for calculations with GGA functional. Within that, create subdirectories with POSCARs with different amounts of vacuum spacing, ranging from ~ 10 – 20 Å.
###Code
%mkdir GGA
%cd GGA
for vac in [10,12,14,15,16,18,20]:
slabutils.gen_unitcell_2d('../POSCAR_bulk',vac,from_bulk=True,slabmin=0.0,slabmax=0.5,zaxis='c')
###Output
mkdir: cannot create directory ‘GGA’: File exists
/blue/hennig/annemarietan/test/MoS2/unitcell/GGA
###Markdown
Remember! You can always use `help()` to display the documentation of each function just as you would any other python function/module.
###Code
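# For example: help(slabutils.gen_unitcell_2d)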
%ls
###Output
[0m[01;34mvac_10[0m/ [01;34mvac_12[0m/ [01;34mvac_14[0m/ [01;34mvac_15[0m/ [01;34mvac_16[0m/ [01;34mvac_18[0m/ [01;34mvac_20[0m/
###Markdown
As you should see above, this script has created new sub-directories called `vac_10`, `vac_12`, etc. \Let's enter one of them to see what's inside.
###Code
%cd vac_10
%ls
%cat POSCAR
###Output
Mo1 S2
1.0
3.192238 0.000000 0.000000
-1.596119 2.764559 0.000000
0.000000 0.000000 13.128327
Mo S
1 2
direct
0.666667 0.333333 0.500000 Mo
0.333333 0.666667 0.619144 S
0.333333 0.666667 0.380856 S
###Markdown
Using the simple crystal structure viewer again, you should see that you now have a 3-atom unitcell of the MoS$_2$ monolayer surrounded by vacuum.
###Code
#from ase.visualize import view
#from pymatgen.io.ase import AseAtomsAdaptor
#from pymatgen.io.vasp import Poscar
structure = Poscar.from_file('POSCAR').structure
atoms = AseAtomsAdaptor.get_atoms(structure)
ngl_handler = view(atoms, viewer='ngl')
ngl_handler.view.add_representation('ball+stick', selection='all')
ngl_handler.view.center()
ngl_handler
###Output
_____no_output_____
###Markdown
* Prepare the POTCAR by concatenating the appropriate element POTCARs. \We usually use the pseudopotentials suggested by materialsproject. So, in this case, we will use the `Mo_pv` POTCAR for Mo and `S` POTCAR for S. \Note that pymatgen orders the elements in the POSCAR by increasing electronegativity, hence the element POTCARs must be concatenated in the same order.
###Code
%cat /home/annemarietan/POTCAR/POT_GGA_PAW_PBE/Mo_pv/POTCAR /home/annemarietan/POTCAR/POT_GGA_PAW_PBE/S/POTCAR > POTCAR
%ls
###Output
POSCAR POTCAR
###Markdown
* Prepare INCAR for a standard structural relaxation run, but with ISIF = 3 to relax cell parameters as well to get the equilibrium lattice constant.
###Code
incar.generate(runtype='relax',functional='PBE',relaxcell=True)
# if you get a BadPotcarWarning from pymatgen don't worry about it...
%cat INCAR
###Output
PREC = Accurate
ALGO = Fast
LREAL = Auto
ISYM = 0
NELECT = 24
ENCUT = 520
NELM = 120
EDIFF = 1e-05
ISIF = 3
IBRION = 2
NSW = 100
ISMEAR = 1
SIGMA = 0.1
ISPIN = 2
MAGMOM = 1*5.0 2*0.6
LPLANE = True
NCORE = 1
LWAVE = False
LCHARG = True
LMAXMIX = 4
LORBIT = 11
LVHAR = True
###Markdown
* Prepare KPOINTS. For 2D materials, we decided on kpts per reciprocal atom (p.r.a.) > 400. Typically, I try to choose a value of kpts p.r.a. that gives me an easily divisible mesh size, e.g. 12x12x1
###Code
kpoints.generate_uniform(kppa=440)
%cat KPOINTS
###Output
automatically generated KPOINTS with 2d grid density = 440 per reciprocal atom
0
Gamma
12 12 1
###Markdown
* Prepare the submission script. For now, you may leave the queue/nodes/memory/time at their default values, but remember to change the email option to *YOUR* email address! In fact, you may want to just change the default in the `submit.generate()` function itself.
###Code
submit.generate(jobname='MoS2_unitcell', email='[email protected]', time='6:00:00')
%cat submitVASP.sh
%ls
###Output
INCAR KPOINTS POSCAR POTCAR submitVASP.sh
###Markdown
* You should have a POSCAR, POTCAR, INCAR, KPOINTS, and submitVASP.sh file in this directory now. \Go ahead and submit your job by typing `sbatch submitVASP.sh` on hipergator! * Now, let's go back and do the same in all the other `vac_` subdirectories. You can either re-run all the commands/scripts to generate the POTCAR, INCAR, KPOINTS and submitVASP.sh again, or simply copy them from this directory into all the others. (All of these calculations will use the same VASP input files except for the POSCARs which have different vacuum spacings.)
###Code
%cd /blue/hennig/annemarietan/test/MoS2/unitcell/GGA/
%cp vac_10/{INCAR,KPOINTS,POTCAR,submitVASP.sh} vac_12/
%cp vac_10/{INCAR,KPOINTS,POTCAR,submitVASP.sh} vac_14/
%cp vac_10/{INCAR,KPOINTS,POTCAR,submitVASP.sh} vac_15/
%cp vac_10/{INCAR,KPOINTS,POTCAR,submitVASP.sh} vac_16/
%cp vac_10/{INCAR,KPOINTS,POTCAR,submitVASP.sh} vac_18/
%cp vac_10/{INCAR,KPOINTS,POTCAR,submitVASP.sh} vac_20/
###Output
/blue/hennig/annemarietan/test/MoS2/unitcell/GGA
###Markdown
**EXERCISE**: When your jobs are completed, plot out:* final energy (where to find the final energy?) vs. vacuum spacing* in-plane lattice constant vs. vacuum spacingDo you observe convergence of these quantities with increasing vacuum spacing? How do your values compare with those reported in literature? 2. Next, we would like to calculate the band structure of the pristine monolayer: * Enter one of the directories from before and create a subdirectory in it called `bands`.
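For the exercise above, here is one way to collect the final energies and in-plane lattice constants with pymatgen once the relaxations finish (a minimal sketch, assuming each `vac_*` directory contains a converged `vasprun.xml` and that it is run from the `GGA` directory holding the `vac_*` folders):
###Code
# Sketch: convergence of total energy and in-plane lattice constant with vacuum spacing.
# Assumes the relaxations above completed; paths are relative to the GGA directory.
import matplotlib.pyplot as plt
from pymatgen.io.vasp.outputs import Vasprun
vacs = [10, 12, 14, 15, 16, 18, 20]
energies, a_lat = [], []
for vac in vacs:
    run = Vasprun('vac_{}/vasprun.xml'.format(vac), parse_dos=False, parse_eigen=False)
    energies.append(run.final_energy)              # final total energy (eV)
    a_lat.append(run.final_structure.lattice.a)    # relaxed in-plane lattice constant (Angstrom)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(vacs, energies, 'o-')
ax1.set_xlabel('vacuum spacing (A)')
ax1.set_ylabel('final energy (eV)')
ax2.plot(vacs, a_lat, 'o-')
ax2.set_xlabel('vacuum spacing (A)')
ax2.set_ylabel('in-plane a (A)')
plt.tight_layout()
plt.show()
###Output
_____no_output_____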
###Code
%cd /blue/hennig/annemarietan/test/MoS2/unitcell/GGA/vac_10
%mkdir bands
%cd bands
###Output
/blue/hennig/annemarietan/test/MoS2/unitcell/GGA/vac_10
mkdir: cannot create directory ‘bands’: File exists
/blue/hennig/annemarietan/test/MoS2/unitcell/GGA/vac_10/bands
###Markdown
* Copy the CONTCAR from the converged structural relaxation run and rename it POSCAR. \Also copy the CHGCAR, POTCAR and submission script into the new directory. \(For demonstration purposes, I have run this calculation beforehand and saved the results in `/orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/`, so I will copy the CONTCAR and CHGCAR from there.)
###Code
## %cp ../CONTCAR POSCAR
## %cp ../CHGCAR .
%cp /orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/CONTCAR POSCAR
%cp /orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/CHGCAR .
%cp ../{POTCAR,submitVASP.sh} .
%ls
###Output
CHGCAR POSCAR POTCAR submitVASP.sh
###Markdown
* We need to generate a new INCAR file with a few different tags for the band structure calculation: \`IBRION = -1` and `NSW = 0` specify that only a single ionic step will be performed. \If a CHGCAR or WAVECAR is provided, `ICHARG = 11` or `ICHARG = 10` specify that a non-selfconsistent calculation will be performed, meaning that the charge density will be kept constant throughout the calculation.
###Code
incar.generate(runtype='bands',functional='PBE')
%cat INCAR
###Output
PREC = Accurate
ALGO = Fast
LREAL = Auto
ICHARG = 11
ISYM = 0
NELECT = 24
ENCUT = 520
NELM = 120
EDIFF = 1e-05
ISIF = 2
IBRION = -1
NSW = 0
ISMEAR = 1
SIGMA = 0.1
ISPIN = 2
MAGMOM = 1*5.0 2*0.6
LPLANE = True
NCORE = 8
LWAVE = False
LCHARG = False
LMAXMIX = 4
LORBIT = 11
LVHAR = True
###Markdown
* For the band structure calculation, we need to specify a different type of KPOINTS file. Instead of specifying a uniform k-point grid, we specify a high symmetry path along which we want to evaluate the band structure. The choice of high symmetry path depends on the symmetry inherent to that crystal structure, and is determined using a procedure developed by [Setyawan and Curtarolo](https://arxiv.org/abs/1004.2974). \For a 2D hexagonal structure such as MoS$_2$, the relevant high symmetry path passes through the points Γ-M-K-Γ. You can visualize the kpath using [this online tool](http://materials.duke.edu/awrapper.html). You should get something that looks like this:
###Code
kpoints.generate_line(ndiv=20, dim=2)
%mv KPOINTS_bands KPOINTS
%cat KPOINTS
###Output
Line_mode KPOINTS file
20
Line_mode
Reciprocal
0.0 0.0 0.0 ! \Gamma
0.5 0.0 0.0 ! M
0.5 0.0 0.0 ! M
0.3333333333333333 0.3333333333333333 0.0 ! K
0.3333333333333333 0.3333333333333333 0.0 ! K
0.0 0.0 0.0 ! \Gamma
###Markdown
* You should have a CHGCAR, POSCAR, POTCAR, INCAR, KPOINTS, and submitVASP.sh file in this directory now. Go ahead and submit your job! **EXERCISE**: When your job is completed, plot the band structure diagram using the following script: (Again, for demonstration purposes, I have run this calculation beforehand and saved the results in `/orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/bands`, so I will run the script from there.)
###Code
%cd /orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/bands
import matplotlib
from pymatgen.io.vasp.outputs import Vasprun
from pymatgen.electronic_structure.plotter import BSPlotter
run = Vasprun("vasprun.xml",parse_projected_eigen=False)
bs = run.get_band_structure("KPOINTS",line_mode=True)
print("Band gap: ",bs.get_band_gap())
bsplot = BSPlotter(bs)
bs.is_spin_polarized = False
bsplot.get_plot(zero_to_efermi=True,vbm_cbm_marker=True).savefig("bandstructure.png")
###Output
/orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/bands
###Markdown
* Repeat this calculation for other vacuum spacings as well. \Do you observe a strong dependence of the band structure and band gap with vacuum spacing? How does your band structure compare with that reported in literature? 3. Now for the dielectric tensor: * Enter one of the directories from before and create a subdirectory in it called `dielectric`.
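One way to compare the band gaps across vacuum spacings, assuming a `bands` run was completed inside each `vac_*` directory (a minimal sketch; run it from the `GGA` directory that holds the `vac_*` folders):
###Code
# Sketch: print the PBE band gap for each vacuum spacing.
# Assumes vac_*/bands/vasprun.xml and vac_*/bands/KPOINTS exist from band-structure runs like the one above.
from pymatgen.io.vasp.outputs import Vasprun
for vac in [10, 12, 14, 15, 16, 18, 20]:
    run = Vasprun('vac_{}/bands/vasprun.xml'.format(vac), parse_projected_eigen=False)
    bs = run.get_band_structure('vac_{}/bands/KPOINTS'.format(vac), line_mode=True)
    print(vac, bs.get_band_gap())
###Output
_____no_output_____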
###Code
%cd /blue/hennig/annemarietan/test/MoS2/unitcell/GGA/vac_10
%mkdir dielectric
%cd dielectric
###Output
/blue/hennig/annemarietan/test/MoS2/unitcell/GGA/vac_10
/blue/hennig/annemarietan/test/MoS2/unitcell/GGA/vac_10/dielectric
###Markdown
* Copy the CONTCAR from the converged structural relaxation run and rename it POSCAR. \Also copy the KPOINTS, POTCAR and submission script into the new directory.
###Code
##%cp ../CONTCAR POSCAR
%cp /orange/hennig/annemarietan/precalculated_for_demo/MoS2/unitcell/GGA/vac_10/CONTCAR POSCAR
%cp ../{KPOINTS,POTCAR,submitVASP.sh} .
%ls
###Output
KPOINTS POSCAR POTCAR submitVASP.sh
###Markdown
* We need to generate a new INCAR file with a few different tags for the dielectric tensor calculation: \`LEPSILON = True` and `LPEAD = True` determine the static (ion-clamped) dielectric matrix using density functional perturbation theory, while `IBRION = 6` determines the ionic contribution to the dielectric tensor by finite differences.
###Code
incar.generate(runtype='dielectric',functional='PBE')
%cat INCAR
###Output
PREC = Accurate
ALGO = Fast
LREAL = Auto
NELECT = 24
ENCUT = 520
NELM = 120
EDIFF = 1e-05
ISIF = 2
IBRION = 6
NSW = 100
ISMEAR = 1
SIGMA = 0.1
ISPIN = 2
MAGMOM = 1*5.0 2*0.6
LPLANE = True
LWAVE = False
LCHARG = False
LMAXMIX = 4
LORBIT = 11
LEPSILON = True
LPEAD = True
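###Markdown
Once this job finishes, the static and ionic dielectric tensors can be read back from `vasprun.xml` with pymatgen (a minimal sketch, assuming the run completed in this `dielectric` directory):
###Code
# Sketch: read the dielectric tensors parsed by pymatgen from vasprun.xml.
from pymatgen.io.vasp.outputs import Vasprun
run = Vasprun('vasprun.xml')
print('ion-clamped (electronic) dielectric tensor:', run.epsilon_static)
print('ionic contribution:', run.epsilon_ionic)
###Output
_____no_output_____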
|
site/ko/alpha/tutorials/keras/basic_classification.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
첫 번째 신경망 훈련하기: 기초적인 분류 문제 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs](https://github.com/tensorflow/docs) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 참여하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ko)로메일을 보내주시기 바랍니다. 이 튜토리얼에서는 운동화나 셔츠 같은 옷 이미지를 분류하는 신경망 모델을 훈련합니다. 상세 내용을 모두 이해하지 못해도 괜찮습니다. 여기서는 완전한 텐서플로(TensorFlow) 프로그램을 빠르게 살펴 보겠습니다. 자세한 내용은 앞으로 배우면서 더 설명합니다.여기에서는 텐서플로 모델을 만들고 훈련할 수 있는 고수준 API인 [tf.keras](https://www.tensorflow.org/guide/keras)를 사용합니다.
###Code
!pip install tensorflow==2.0.0-alpha0
from __future__ import absolute_import, division, print_function, unicode_literals
# tensorflow와 tf.keras를 임포트합니다
import tensorflow as tf
from tensorflow import keras
# 헬퍼(helper) 라이브러리를 임포트합니다
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
패션 MNIST 데이터셋 임포트하기 10개의 범주(category)와 70,000개의 흑백 이미지로 구성된 [패션 MNIST](https://github.com/zalandoresearch/fashion-mnist) 데이터셋을 사용하겠습니다. 이미지는 해상도(28x28 픽셀)가 낮고 다음처럼 개별 옷 품목을 나타냅니다: <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> 그림 1. 패션-MNIST 샘플 (Zalando, MIT License). 패션 MNIST는 컴퓨터 비전 분야의 "Hello, World" 프로그램격인 고전 [MNIST](http://yann.lecun.com/exdb/mnist/) 데이터셋을 대신해서 자주 사용됩니다. MNIST 데이터셋은 손글씨 숫자(0, 1, 2 등)의 이미지로 이루어져 있습니다. 여기서 사용하려는 옷 이미지와 동일한 포맷입니다.패션 MNIST는 일반적인 MNIST 보다 조금 더 어려운 문제이고 다양한 예제를 만들기 위해 선택했습니다. 두 데이터셋은 비교적 작기 때문에 알고리즘의 작동 여부를 확인하기 위해 사용되곤 합니다. 코드를 테스트하고 디버깅하는 용도로 좋습니다.네트워크를 훈련하는데 60,000개의 이미지를 사용합니다. 그다음 네트워크가 얼마나 정확하게 이미지를 분류하는지 10,000개의 이미지로 평가하겠습니다. 패션 MNIST 데이터셋은 텐서플로에서 바로 임포트하여 적재할 수 있습니다:
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
load_data() 함수를 호출하면 네 개의 넘파이(NumPy) 배열이 반환됩니다:* `train_images`와 `train_labels` 배열은 모델 학습에 사용되는 *훈련 세트*입니다.* `test_images`와 `test_labels` 배열은 모델 테스트에 사용되는 *테스트 세트*입니다.이미지는 28x28 크기의 넘파이 배열이고 픽셀 값은 0과 255 사이입니다. *레이블*(label)은 0에서 9까지의 정수 배열입니다. 이 값은 이미지에 있는 옷의 *클래스*(class)를 나타냅니다: 레이블 클래스 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 각 이미지는 하나의 레이블에 매핑되어 있습니다. 데이터셋에 *클래스 이름*이 들어있지 않기 때문에 나중에 이미지를 출력할 때 사용하기 위해 별도의 변수를 만들어 저장합니다:
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
데이터 탐색모델을 훈련하기 전에 데이터셋 구조를 살펴보죠. 다음 코드는 훈련 세트에 60,000개의 이미지가 있다는 것을 보여줍니다. 각 이미지는 28x28 픽셀로 표현됩니다:
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
비슷하게 훈련 세트에는 60,000개의 레이블이 있습니다:
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
각 레이블은 0과 9사이의 정수입니다:
###Code
train_labels
###Output
_____no_output_____
###Markdown
테스트 세트에는 10,000개의 이미지가 있습니다. 이 이미지도 28x28 픽셀로 표현됩니다:
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
테스트 세트는 10,000개의 이미지에 대한 레이블을 가지고 있습니다:
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
데이터 전처리네트워크를 훈련하기 전에 데이터를 전처리해야 합니다. 훈련 세트에 있는 첫 번째 이미지를 보면 픽셀 값의 범위가 0~255 사이라는 것을 알 수 있습니다:
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
신경망 모델에 주입하기 전에 이 값의 범위를 0~1 사이로 조정하겠습니다. 이렇게 하려면 255로 나누어야 합니다. *훈련 세트*와 *테스트 세트*를 동일한 방식으로 전처리하는 것이 중요합니다:
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
*훈련 세트*에서 처음 25개 이미지와 그 아래 클래스 이름을 출력해 보죠. 데이터 포맷이 올바른지 확인하고 네트워크 구성과 훈련할 준비를 마칩니다.
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
모델 구성신경망 모델을 만들려면 모델의 층을 구성한 다음 모델을 컴파일합니다. 층 설정신경망의 기본 구성 요소는 *층*(layer)입니다. 층은 주입된 데이터에서 표현을 추출합니다. 아마도 문제를 해결하는데 더 의미있는 표현이 추출될 것입니다.대부분 딥러닝은 간단한 층을 연결하여 구성됩니다. `tf.keras.layers.Dense`와 같은 층들의 가중치(parameter)는 훈련하는 동안 학습됩니다.
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
이 네트워크의 첫 번째 층인 `tf.keras.layers.Flatten`은 2차원 배열(28 x 28 픽셀)의 이미지 포맷을 28 * 28 = 784 픽셀의 1차원 배열로 변환합니다. 이 층은 이미지에 있는 픽셀의 행을 펼쳐서 일렬로 늘립니다. 이 층에는 학습되는 가중치가 없고 데이터를 변환하기만 합니다.픽셀을 펼친 후에는 두 개의 `tf.keras.layers.Dense` 층이 연속되어 연결됩니다. 이 층을 밀집 연결(densely-connected) 또는 완전 연결(fully-connected) 층이라고 부릅니다. 첫 번째 `Dense` 층은 128개의 노드(또는 뉴런)를 가집니다. 두 번째 (마지막) 층은 10개의 노드의 *소프트맥스*(softmax) 층입니다. 이 층은 10개의 확률을 반환하고 반환된 값의 전체 합은 1입니다. 각 노드는 현재 이미지가 10개 클래스 중 하나에 속할 확률을 출력합니다. 모델 컴파일모델을 훈련하기 전에 필요한 몇 가지 설정이 모델 *컴파일* 단계에서 추가됩니다:* *손실 함수*(Loss function)-훈련 하는 동안 모델의 오차를 측정합니다. 모델의 학습이 올바른 방향으로 향하도록 이 함수를 최소화해야 합니다.* *옵티마이저*(Optimizer)-데이터와 손실 함수를 바탕으로 모델의 업데이트 방법을 결정합니다.* *지표*(Metrics)-훈련 단계와 테스트 단계를 모니터링하기 위해 사용합니다. 다음 예에서는 올바르게 분류된 이미지의 비율인 *정확도*를 사용합니다.
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
모델 훈련신경망 모델을 훈련하는 단계는 다음과 같습니다:1. 훈련 데이터를 모델에 주입합니다-이 예에서는 `train_images`와 `train_labels` 배열입니다.2. 모델이 이미지와 레이블을 매핑하는 방법을 배웁니다.3. 테스트 세트에 대한 모델의 예측을 만듭니다-이 예에서는 `test_images` 배열입니다. 이 예측이 `test_labels` 배열의 레이블과 맞는지 확인합니다.훈련을 시작하기 위해 `model.fit` 메서드를 호출하면 모델이 훈련 데이터를 학습합니다:
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
모델이 훈련되면서 손실과 정확도 지표가 출력됩니다. 이 모델은 훈련 세트에서 약 0.88(88%) 정도의 정확도를 달성합니다. 정확도 평가그다음 테스트 세트에서 모델의 성능을 비교합니다:
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\n테스트 정확도:', test_acc)
###Output
_____no_output_____
###Markdown
테스트 세트의 정확도가 훈련 세트의 정확도보다 조금 낮습니다. 훈련 세트의 정확도와 테스트 세트의 정확도 사이의 차이는 *과대적합*(overfitting) 때문입니다. 과대적합은 머신러닝 모델이 훈련 데이터보다 새로운 데이터에서 성능이 낮아지는 현상을 말합니다. 예측 만들기훈련된 모델을 사용하여 이미지에 대한 예측을 만들 수 있습니다.
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
여기서는 테스트 세트에 있는 각 이미지의 레이블을 예측했습니다. 첫 번째 예측을 확인해 보죠:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
이 예측은 10개의 숫자 배열로 나타납니다. 이 값은 10개의 옷 품목에 상응하는 모델의 신뢰도(confidence)를 나타냅니다. 가장 높은 신뢰도를 가진 레이블을 찾아보죠:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
모델은 이 이미지가 앵클 부츠(`class_name[9]`)라고 가장 확신하고 있습니다. 이 값이 맞는지 테스트 레이블을 확인해 보죠:
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10개 클래스에 대한 예측을 모두 그래프로 표현해 보겠습니다:
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0번째 원소의 이미지, 예측, 신뢰도 점수 배열을 확인해 보겠습니다.
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
몇 개의 이미지의 예측을 출력해 보죠. 올바르게 예측된 레이블은 파란색이고 잘못 예측된 레이블은 빨강색입니다. 숫자는 예측 레이블의 신뢰도 퍼센트(100점 만점)입니다. 신뢰도 점수가 높을 때도 잘못 예측할 수 있습니다.
###Code
# 처음 X 개의 테스트 이미지와 예측 레이블, 진짜 레이블을 출력합니다
# 올바른 예측은 파랑색으로 잘못된 예측은 빨강색으로 나타냅니다
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
마지막으로 훈련된 모델을 사용하여 한 이미지에 대한 예측을 만듭니다.
###Code
# 테스트 세트에서 이미지 하나를 선택합니다
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` 모델은 한 번에 샘플의 묶음 또는 *배치*(batch)로 예측을 만드는데 최적화되어 있습니다. 하나의 이미지를 사용할 때에도 2차원 배열로 만들어야 합니다:
###Code
# 이미지 하나만 사용할 때도 배치에 추가합니다
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
이제 이 이미지의 예측을 만듭니다:
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict`는 2차원 넘파이 배열을 반환하므로 첫 번째 이미지의 예측을 선택합니다:
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
첫 번째 신경망 훈련하기: 기초적인 분류 문제 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs](https://github.com/tensorflow/docs) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 참여하려면 [[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs)로메일을 보내주시기 바랍니다. 이 튜토리얼에서는 운동화나 셔츠 같은 옷 이미지를 분류하는 신경망 모델을 훈련합니다. 상세 내용을 모두 이해하지 못해도 괜찮습니다. 여기서는 완전한 텐서플로(TensorFlow) 프로그램을 빠르게 살펴 보겠습니다. 자세한 내용은 앞으로 배우면서 더 설명합니다.여기에서는 텐서플로 모델을 만들고 훈련할 수 있는 고수준 API인 [tf.keras](https://www.tensorflow.org/guide/keras)를 사용합니다.
###Code
!pip install tensorflow==2.0.0-alpha0
from __future__ import absolute_import, division, print_function, unicode_literals
# tensorflow와 tf.keras를 임포트합니다
import tensorflow as tf
from tensorflow import keras
# 헬퍼(helper) 라이브러리를 임포트합니다
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
패션 MNIST 데이터셋 임포트하기 10개의 범주(category)와 70,000개의 흑백 이미지로 구성된 [패션 MNIST](https://github.com/zalandoresearch/fashion-mnist) 데이터셋을 사용하겠습니다. 이미지는 해상도(28x28 픽셀)가 낮고 다음처럼 개별 옷 품목을 나타냅니다: <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> 그림 1. 패션-MNIST 샘플 (Zalando, MIT License). 패션 MNIST는 컴퓨터 비전 분야의 "Hello, World" 프로그램격인 고전 [MNIST](http://yann.lecun.com/exdb/mnist/) 데이터셋을 대신해서 자주 사용됩니다. MNIST 데이터셋은 손글씨 숫자(0, 1, 2 등)의 이미지로 이루어져 있습니다. 여기서 사용하려는 옷 이미지와 동일한 포맷입니다.패션 MNIST는 일반적인 MNIST 보다 조금 더 어려운 문제이고 다양한 예제를 만들기 위해 선택했습니다. 두 데이터셋은 비교적 작기 때문에 알고리즘의 작동 여부를 확인하기 위해 사용되곤 합니다. 코드를 테스트하고 디버깅하는 용도로 좋습니다.네트워크를 훈련하는데 60,000개의 이미지를 사용합니다. 그다음 네트워크가 얼마나 정확하게 이미지를 분류하는지 10,000개의 이미지로 평가하겠습니다. 패션 MNIST 데이터셋은 텐서플로에서 바로 임포트하여 적재할 수 있습니다:
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
load_data() 함수를 호출하면 네 개의 넘파이(NumPy) 배열이 반환됩니다:* `train_images`와 `train_labels` 배열은 모델 학습에 사용되는 *훈련 세트*입니다.* `test_images`와 `test_labels` 배열은 모델 테스트에 사용되는 *테스트 세트*입니다.이미지는 28x28 크기의 넘파이 배열이고 픽셀 값은 0과 255 사이입니다. *레이블*(label)은 0에서 9까지의 정수 배열입니다. 이 값은 이미지에 있는 옷의 *클래스*(class)를 나타냅니다: 레이블 클래스 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 각 이미지는 하나의 레이블에 매핑되어 있습니다. 데이터셋에 *클래스 이름*이 들어있지 않기 때문에 나중에 이미지를 출력할 때 사용하기 위해 별도의 변수를 만들어 저장합니다:
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
데이터 탐색모델을 훈련하기 전에 데이터셋 구조를 살펴보죠. 다음 코드는 훈련 세트에 60,000개의 이미지가 있다는 것을 보여줍니다. 각 이미지는 28x28 픽셀로 표현됩니다:
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
비슷하게 훈련 세트에는 60,000개의 레이블이 있습니다:
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
각 레이블은 0과 9사이의 정수입니다:
###Code
train_labels
###Output
_____no_output_____
###Markdown
테스트 세트에는 10,000개의 이미지가 있습니다. 이 이미지도 28x28 픽셀로 표현됩니다:
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
테스트 세트는 10,000개의 이미지에 대한 레이블을 가지고 있습니다:
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
데이터 전처리네트워크를 훈련하기 전에 데이터를 전처리해야 합니다. 훈련 세트에 있는 첫 번째 이미지를 보면 픽셀 값의 범위가 0~255 사이라는 것을 알 수 있습니다:
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
Scale these values to a range of 0 to 1 before feeding them to the neural network model; to do so, divide the values by 255. It is important that the *training set* and the *test set* are preprocessed in the same way:
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
Display the first 25 images from the *training set* with the class name below each image, to verify that the data is in the correct format before building and training the network.
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
Build the model To build the neural network model, configure the layers of the model and then compile it. Set up the layers The basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them, hopefully representations that are more meaningful for the problem at hand. Most of deep learning consists of chaining together simple layers. The weights (parameters) of layers such as `tf.keras.layers.Dense` are learned during training.
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
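###Markdown
If you are curious how big this model is, an optional way to check is `model.summary()`, which lists each layer's output shape and parameter count. Doing the arithmetic by hand: the first `Dense` layer has 784 * 128 + 128 = 100,480 parameters, the second has 128 * 10 + 10 = 1,290, for 101,770 trainable parameters in total.
###Code
# Optional: print the layer shapes and parameter counts of the model defined above
model.summary()
###Output
_____no_output_____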
###Markdown
The first layer in this network, `tf.keras.layers.Flatten`, transforms the image format from a 2-dimensional array (28 x 28 pixels) to a 1-dimensional array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up; it has no weights to learn and only reformats the data. After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers, also called densely-connected or fully-connected layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer that returns 10 probability scores summing to 1; each node outputs the probability that the current image belongs to one of the 10 classes. Compile the model Before the model is ready for training, a few more settings are added during the *compile* step: * *Loss function* - measures how inaccurate the model is during training; you want to minimize this function to steer the model in the right direction. * *Optimizer* - determines how the model is updated based on the data and the loss function. * *Metrics* - used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of images that are correctly classified.
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model Training the neural network model requires the following steps: 1. Feed the training data to the model; in this example, the `train_images` and `train_labels` arrays. 2. The model learns to associate images and labels. 3. Make predictions on the test set; in this example, the `test_images` array, and check that the predictions match the labels in the `test_labels` array. To start training, call the `model.fit` method; the model then learns from the training data:
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (88%) on the training data. Evaluate accuracy Next, compare how the model performs on the test set:
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
The accuracy on the test set is a little lower than on the training set. This gap between training accuracy and test accuracy is due to *overfitting*: a machine learning model performing worse on new data than on its training data. Make predictions With the model trained, you can use it to make predictions about images.
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
A prediction is an array of 10 numbers describing the model's confidence that the image corresponds to each of the 10 different articles of clothing. Let's find the label with the highest confidence value:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
So the model is most confident that this image is an ankle boot (`class_names[9]`). Let's check the test label to see whether this is correct:
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
Let's graph all 10 confidence values:
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
Let's look at the 0th element: its image, prediction, and array of confidence scores.
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Let's plot several images with their predictions. Correctly predicted labels are blue and incorrect ones are red; the number gives the confidence (out of 100) for the predicted label. Note that the model can be wrong even when its confidence is high.
###Code
# Plot the first X test images with their predicted and true labels
# Correct predictions are shown in blue, incorrect ones in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Finally, use the trained model to make a prediction about a single image.
###Code
# Grab an image from the test set
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. So even though you are using a single image, you need to add it to a 2-dimensional array:
###Code
# Add the image to a batch where it is the only member
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
Now predict the label for this image:
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` returns a 2-dimensional NumPy array, so grab the prediction for the first (and only) image in the batch:
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____ |
notebooks/2. Input generation.ipynb | ###Markdown
The header file is a bit too compact to process; unpacking some of its info will give us more flexibility.
###Code
def header_info_extractor(data_header):
'''
data_header: pandas dataframe of loaded csv file which describes the images
'''
image_files = list(data_header['IMAGE_FILENAME'].values)
labels = data_header['LABEL'].values.astype(str)
label_set = sorted(list(set(labels)))
new_data_block = []
for row in zip(image_files, labels):
file_name = row[0].split('_')
new_data_block.append(file_name[1:-1] + [row[1]])
new_data_block = np.array(new_data_block)
    # changing labels to numbers can help data processing
for i, x in enumerate(label_set):
new_data_block[new_data_block[:,-1] == x,-1] = i
    new_data_block = new_data_block.astype(int)  # np.int was removed in newer NumPy; plain int is equivalent here
return new_data_block, image_files, label_set
# testing the function
data_header = pd.read_csv('../data/gicsd_labels.csv', sep=', ', engine='python')
new_data_block, image_files, classes = header_info_extractor(data_header)
###Output
_____no_output_____
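###Markdown
As an optional check of the parsed header (this assumes `numpy` is already imported as `np` in the notebook's setup cells), note that the last column of `new_data_block` holds the numeric label, so a simple bincount shows how the classes are distributed:
###Code
# peek at a few parsed rows and count images per class (sketch)
print(new_data_block[:3])
print(dict(zip(classes, np.bincount(new_data_block[:, -1]))))
###Output
_____no_output_____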
###Markdown
From the information we learned, we can only use the blue channel. This will give us a single-channel image.
###Code
def load_image(image_file):
'''
image_file: file name of the image in dataset
return: blue channel of the loaded image
'''
file_path = os.path.join('../data','images', image_file)
image_bgr = cv2.imread(file_path)
return image_bgr[:,:,0]
# testing the function
gray_image = load_image(image_files[10])
plt.imshow(gray_image, cmap='gray'); plt.title('gray image'); plt.axis('off');
###Output
_____no_output_____
###Markdown
To load images, I will use PyTorch's dataset structure because it's easy to use and understand. Adding the already written functions to the class will give us a nicer interface for using the dataset.
###Code
class CardImageDataset():
def __init__(self, root_dir='../data', header_file='gicsd_labels.csv', image_dir='images'):
'''
root_dir: location of the dataset dir
header_file: location of the dataset header in the dataset directory
image_dir: location of the images
'''
header_path = os.path.join(root_dir,header_file)
self.data_header = pd.read_csv(header_path, sep=', ', engine='python')
self.image_dir = os.path.join(root_dir,image_dir)
self.header_info, self.image_files, self.classes = self.header_info_extractor()
self.length = len(self.image_files)
def __len__(self):
return self.length
def __getitem__(self, idx):
gray_image = self.load_image(self.image_files[idx])
label = self.header_info[idx,-1]
return {'image': gray_image, 'label': label}
def load_image(self, image_file):
'''
image_file: file name of the image in dataset
return: blue channel of the loaded image
'''
file_path = os.path.join(self.image_dir, image_file)
image_bgr = cv2.imread(file_path)
return image_bgr[:,:,0]
def header_info_extractor(self):
'''
data_header: pandas dataframe of loaded csv file which describes the images
'''
image_files = list(self.data_header['IMAGE_FILENAME'].values)
labels = self.data_header['LABEL'].values.astype(str)
label_set = sorted(list(set(labels)))
new_data_block = []
for row in zip(image_files, labels):
file_name = row[0].split('_')
new_data_block.append(file_name[1:-1] + [row[1]])
new_data_block = np.array(new_data_block)
        # changing labels to numbers can help data processing
for i, x in enumerate(label_set):
new_data_block[new_data_block[:,-1] == x,-1] = i
        new_data_block = new_data_block.astype(int)  # np.int was removed in newer NumPy; plain int is equivalent here
return new_data_block, image_files, label_set
# testing the class
dataset = CardImageDataset(root_dir='../data', header_file='gicsd_labels.csv', image_dir='images')
print('dataset length: ', len(dataset))
plt.imshow(dataset[10]['image'], cmap='gray'); plt.title('class {}'.format(dataset[10]['label'])); plt.axis('off');
###Output
_____no_output_____ |
Document/Examples.ipynb | ###Markdown
Exercise1_3Consider a Markov chain with state space \{1, 2, 3\} and transition matrix$$P =\left(\begin{array}{cc} 0.4 & 0.2 & 0.4\\0.6 & 0 & 0.4 \\0.2 & 0.5 & 0.3\end{array}\right)$$
###Code
states = [1, 2, 3]
trans = np.array([[0.4, 0.2, 0.4],
[0.6, 0 , 0.4],
[0.2, 0.5, 0.3]])
rw = RandomWalk(states, trans)
###Output
_____no_output_____
###Markdown
what is the probability in the long run that the chain is in state 1?Solve this problem two different ways:1) by raising the matrix to a high power:
###Code
rw.trans_power(1000)
###Output
_____no_output_____
###Markdown
2) by directly computing the invariant probability vector as a left eigenvector:
###Code
rw.final_dist()
###Output
_____no_output_____
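###Markdown
As an independent cross-check (a sketch using only NumPy, which this notebook imports as `np`), the invariant probability vector is the left eigenvector of the transition matrix for eigenvalue 1, normalized to sum to 1. Its first component is the long-run probability of state 1 and should agree with the `final_dist()` result above.
###Code
# compute the left eigenvector of `trans` for eigenvalue 1 and normalize it
eigenvalues, eigenvectors = np.linalg.eig(trans.T)
pi = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues - 1))])
pi = pi / pi.sum()
print(pi)
###Output
_____no_output_____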
###Markdown
Exercise1_4Do the same with$$P =\left(\begin{array}{cc} 0.2 & 0.4 & 0.4\\0.1 & 0.5 & 0.4 \\0.6 & 0.3 & 0.1\end{array}\right)$$
###Code
states = [1, 2, 3]
trans = np.array([[0.2, 0.4, 0.4],
[0.1, 0.5, 0.4],
[0.6, 0.3, 0.1]])
rw = RandomWalk(states, trans)
###Output
_____no_output_____
###Markdown
1) by raising the matrix to a high power:
###Code
rw.trans_power(1000)
###Output
_____no_output_____
###Markdown
2) by directly computing the invariant probability vector as a left eigenvector:
###Code
rw.final_dist()
###Output
_____no_output_____
###Markdown
Exercise1_5Consider the Markov chain with state space $ S = \{0, ..., 5\} $ and transition matrix:$$P =\left(\begin{array}{cc} 0.5 & 0.5 & 0 & 0 & 0 & 0\\0.3 & 0.7 & 0 & 0 & 0 & 0 \\0 & 0 & 0.1 & 0 & 0 & 0.9 \\0.25 & 0.25 & 0 & 0 & 0.25 & 0.25 \\0 & 0 & 0.7 & 0 & 0.3 & 0 \\0 & 0.2 & 0 & 0.2 & 0.2 & 0.4 \\\end{array}\right)$$
###Code
states = list(range(6))
trans = np.array([[0.5, 0.5, 0 , 0 , 0 , 0 ],
[0.3, 0.7, 0 , 0 , 0 , 0 ],
[0 , 0 , 0.1, 0 , 0 , 0.9],
[.25, .25, 0 , 0 , .25, .25],
[0 , 0 , 0.7, 0 , 0.3, 0 ],
[0 , 0.2, 0 , 0.2, 0.2, 0.4]])
rw = RandomWalk(states, trans)
###Output
_____no_output_____
###Markdown
What are the communication classes? Which ones are recurrent and which are transient?
###Code
rw.get_typeof_classes()
###Output
_____no_output_____
###Markdown
Suppose the system starts in state 0. What is the probability that it will be in state 0 at some large time? Answer the same question assuming the system starts in state 5.
###Code
p_1000 = rw.trans_power(1000)
print(p_1000[0, 0])
print(p_1000[5, 5])
###Output
0.37499999999998634
8.081964030507363e-71
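###Markdown
A quick sanity check on these numbers: the only closed (recurrent) communication class is $\{0, 1\}$, whose restricted transition matrix is $\left(\begin{array}{cc} 0.5 & 0.5 \\ 0.3 & 0.7 \end{array}\right)$ with invariant vector $\pi = (0.375, 0.625)$, since $\pi_0 = 0.3/(0.3 + 0.5)$. Starting from state 0 the chain never leaves this class, so the long-run probability of being in state 0 is 0.375, matching the first value. State 5 is transient, so the probability of finding the chain there at a large time tends to 0, matching the second (numerically tiny) value.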
###Markdown
Exercise1_8Consider simple random walk on the graph below. (Recall that simple random walk on a graph is the Markov chain which at each time moves to an adjacent vertex, each adjacent vertex having the same probability):$$P =\left(\begin{array}{cc} 0 & 1/3 & 1/3 & 1/3 & 0 \\1/3 & 0 & 1/3 & 0 & 1/3 \\1/2 & 1/2 & 0 & 0 & 0 \\1/2 & 0 & 0 & 0 & 1/2 \\0 & 1/2 & 0 & 1/2 & 0 \\\end{array}\right)$$
###Code
states = list(range(5))
trans = np.array([[0, 1/3, 1/3, 1/3, 0],
[1/3, 0, 1/3, 0, 1/3],
[1/2, 1/2, 0, 0, 0],
[1/2, 0, 0, 0, 1/2],
[0 , 1/2, 0, 1/2, 0]])
rw = RandomWalk(states, trans)
###Output
_____no_output_____
###Markdown
a) In the long run, what fraction of time is spent in vertex A?
###Code
final_dist = rw.final_dist()
print(final_dist[0])
###Output
0.2500000000000003
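###Markdown
This agrees with the general fact that for simple random walk on a graph the invariant probability of a vertex is proportional to its degree, $\pi_v = \deg(v) / \sum_u \deg(u)$. Here vertex A (state 0) has degree 3 and the degrees sum to $3 + 3 + 2 + 2 + 2 = 12$, so the long-run fraction of time spent at A is $3/12 = 0.25$, as computed above.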
###Markdown
4. Optimal_Stopping Exercise4_1 Consider a simple random walk ($p = 1/2$) with absorbing boundaries on $\{0,1,2,...,10\}$. Suppose the following payoff function is given: $$[0,2,4,3,10,0,6,4,3,3,0]$$ Find the optimal stopping rule and give the expected payoff starting at each site.
###Code
states = list(range(11))
trans = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[.5,0,.5, 0, 0, 0, 0, 0, 0, 0, 0],
[0, .5,0,.5, 0, 0, 0, 0, 0, 0, 0],
[0, 0, .5,0,.5, 0, 0, 0, 0, 0, 0],
[0, 0, 0, .5,0,.5, 0, 0, 0, 0, 0],
[0, 0, 0, 0, .5,0,.5, 0, 0, 0, 0],
[0, 0, 0, 0, 0, .5,0,.5, 0, 0, 0],
[0, 0, 0, 0, 0, 0, .5,0,.5, 0, 0],
[0, 0, 0, 0, 0, 0, 0, .5,0,.5, 0],
[0, 0, 0, 0, 0, 0, 0, 0, .5,0,.5],
[0, 0, 0, 0, 0, 0, 0 ,0, 0, 0, 1]])
rw = RandomWalk(states, trans, payoff=[0,2,4,3,10,0,6,4,3,3,0])
best_policy = rw.best_policy()
print(best_policy)
###Output
{'continue': [1, 2, 3, 5, 6, 7, 8], 'stop': [0, 4, 9, 10]}
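###Markdown
To also get the expected payoff starting at each site, here is a small value-iteration sketch that does not rely on pyrandwalk: the value function satisfies $v(x) = \max(f(x), \frac{1}{2}v(x-1) + \frac{1}{2}v(x+1))$ in the interior, with the absorbing boundaries fixed at their payoff. The sites where $v$ equals the payoff are exactly the ones where stopping is optimal, so they should coincide with the `stop` set printed above.
###Code
# value iteration for the optimal stopping problem (library-free sketch)
payoff = np.array([0, 2, 4, 3, 10, 0, 6, 4, 3, 3, 0], dtype=float)
v = payoff.copy()
for _ in range(2000):
    # expected value of taking one more step, then acting optimally afterwards
    v[1:-1] = np.maximum(payoff[1:-1], 0.5 * (v[:-2] + v[2:]))
print(np.round(v, 2))                                    # expected payoff from each site
print([x for x in range(len(v)) if v[x] <= payoff[x]])   # sites where stopping is optimal
###Output
_____no_output_____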
###Markdown
| Google Colab | GitHub || :---: | :---: || Run in Google Colab | View Source on GitHub | Pyrandwalk Examples Version : 1.1----- This example set contains the examples below from the first reference (Introduction to Stochastic Processes): Finite_Markov_Chains Exercise1_2 Exercise1_3 Exercise1_4 Exercise1_5 Exercise1_8 Countable_Markov_Chains Continous_Time_Markov_Chains Optimal_Stopping Exercise4_1
###Code
!pip -q -q install pyrandwalk
from pyrandwalk import *
import numpy as np
###Output
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/usr/lib/python3/dist-packages/secretstorage/util.py:19: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
###Markdown
1. Finite_Markov_Chains Exercise1_2Consider a Markov chain with state space \{0, 1\} and transition matrix$$P =\left(\begin{array}{cc} 1/3 & 2/3\\3/4 & 1/4\end{array}\right)$$
###Code
states = [0, 1]
trans = np.array([[1/3, 2/3],
[3/4, 1/4]])
rw = RandomWalk(states, trans)
###Output
_____no_output_____
###Markdown
Assuming that the chain starts in state 0 at time n = 0, what is the probability that it is in state 1 at time n = 3?
###Code
third_step_trans = rw.trans_power(3)
print(third_step_trans)
print("ANSWER:", third_step_trans[0, 1])
###Output
[[0.49537037 0.50462963]
[0.56770833 0.43229167]]
ANSWER: 0.5046296296296295
|
resources/lab_04/lab_04_exercise.ipynb | ###Markdown
Lab 04 - "Artificial Neural Networks (ANNs)" AssignmentsEMBA 58/59 - W8/3 - "AI Coding for Executives", University of St. Gallen In the last lab we learned how to implement, train, and apply our first **Artificial Neural Network (ANN)** using a Python library named `PyTorch`. The `PyTorch` library is an open-source machine learning library for Python, used for a variety of applications such as image classification and natural language processing. In this lab, we aim to leverage that knowledge by applying it to a set of self-coding assignments. But before we do so let's start with a motivational video by NVIDIA:
###Code
from IPython.display import YouTubeVideo
# NVIDIA: "The Deep Learning Revolution"
YouTubeVideo('Dy0hJWltsyE', width=1000, height=500)
###Output
_____no_output_____
###Markdown
As always, pls. don't hesitate to either ask your questions during the lab, post them in our CANVAS (StudyNet) forum (https://learning.unisg.ch), or send us an email (using the course email). 1. Assignment Objectives: Similar to today's lab session, after today's self-coding assignments you should be able to:> 1. Understand the basic concepts, intuitions and major building blocks of **Artificial Neural Networks (ANNs)**.> 2. Know how to use Python's **PyTorch library** to train and evaluate neural network based models.> 3. Understand how to apply neural networks to **classify images** of fashion articles.> 4. Know how to **interpret the classification results** of the network as well as its **training loss**. 2. Setup of the Jupyter Notebook Environment Similar to the previous labs, we need to import a couple of Python libraries that allow for data analysis and data visualization. We will mostly use the `PyTorch`, `Numpy`, `Sklearn`, `Matplotlib`, `Seaborn` and a few utility libraries throughout this lab:
###Code
# import standard python libraries
import os, urllib, io
from datetime import datetime
import numpy as np
###Output
_____no_output_____
###Markdown
Import the Python machine / deep learning libraries:
###Code
# import the PyTorch deep learning libary
import torch, torchvision
import torch.nn.functional as F
from torch import nn, optim
###Output
_____no_output_____
###Markdown
Import the sklearn classification metrics:
###Code
# import sklearn classification evaluation library
from sklearn import metrics
from sklearn.metrics import classification_report, confusion_matrix
###Output
_____no_output_____
###Markdown
Import Python plotting libraries:
###Code
# import matplotlib, seaborn, and PIL data visualization libary
import matplotlib.pyplot as plt
import seaborn as sns
from PIL import Image
###Output
_____no_output_____
###Markdown
Enable notebook matplotlib inline plotting:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Create a structure of notebook sub-directories to store the data as well as the trained neural network models:
###Code
if not os.path.exists('./data'): os.makedirs('./data') # create data directory
if not os.path.exists('./models'): os.makedirs('./models') # create trained models directory
###Output
_____no_output_____
###Markdown
Set a random `seed` value to obtain reproducable results:
###Code
# init deterministic seed
seed_value = 1234
np.random.seed(seed_value) # set numpy seed
torch.manual_seed(seed_value) # set pytorch seed CPU
###Output
_____no_output_____
###Markdown
3. Artifcial Neural Networks (ANNs) Assignments 3.1 Fashion MNIST Dataset Download and Data Assessment The **Fashion-MNIST database** is a large database of Zalando articles that is commonly used for training various image processing systems. The database is widely used for training and testing in the field of machine learning. Let's have a brief look into a couple of sample images contained in the dataset: Source: https://www.kaggle.com/c/insar-fashion-mnist-challenge Further details on the dataset can be obtained via Zalando research's [github page](https://github.com/zalandoresearch/fashion-mnist). The **Fashion-MNIST database** is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando created this dataset with the intention of providing a replacement for the popular **MNIST** handwritten digits dataset. It is a useful addition as it is a bit more complex, but still very easy to use. It shares the same image size and train/test split structure as MNIST, and can therefore be used as a drop-in replacement. It requires minimal efforts on preprocessing and formatting the distinct images. Let's download, transform and inspect the training images of the dataset. Therefore, let's first define the directory in which we aim to store the training data:
###Code
train_path = './data/train_fashion_mnist'
###Output
_____no_output_____
###Markdown
Now, let's download the training data accordingly:
###Code
# define pytorch transformation into tensor format
transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
# download and transform training images
fashion_mnist_train_data = torchvision.datasets.FashionMNIST(root=train_path, train=True, transform=transf, download=True)
###Output
_____no_output_____
###Markdown
Verify the number of training images downloaded:
###Code
# determine the number of training data images
len(fashion_mnist_train_data)
###Output
_____no_output_____
###Markdown
Next, we need to map each numerical label to its fashion item, which will be useful throughout the lab:
###Code
fashion_classes = {0: 'T-shirt/top',
1: 'Trouser',
2: 'Pullover',
3: 'Dress',
4: 'Coat',
5: 'Sandal',
6: 'Shirt',
7: 'Sneaker',
8: 'Bag',
9: 'Ankle boot'}
###Output
_____no_output_____
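###Markdown
As a quick, optional sanity check of this mapping, you can display one training image together with its human-readable class name, using only the data and plotting libraries loaded above:
###Code
# show one sample from the training set with its mapped class name
sample_image, sample_label = fashion_mnist_train_data[0]
plt.imshow(sample_image.squeeze(), cmap='gray')
plt.title(fashion_classes[sample_label])
plt.axis('off')
plt.show()
###Output
_____no_output_____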
###Markdown
Let's now define the directory in which we aim to store the evaluation data:
###Code
eval_path = './data/eval_fashion_mnist'
###Output
_____no_output_____
###Markdown
And download the evaluation data accordingly:
###Code
# define pytorch transformation into tensor format
transf = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
# download and transform training images
fashion_mnist_eval_data = torchvision.datasets.FashionMNIST(root=eval_path, train=False, transform=transf, download=True)
###Output
_____no_output_____
###Markdown
Let's also verify the number of evaluation images downloaded:
###Code
# determine the number of evaluation data images
len(fashion_mnist_eval_data)
###Output
_____no_output_____
###Markdown
3.2 Artificial Neural Network (ANN) Model Training and Evaluation We recommend you to try the following exercises as part of the self-coding session:**Exercise 1: Train the neural network architecture of the lab for less epochs and evaluate its prediction accuracy.** > Decrease the number of training epochs to **5 epochs** and re-run the network training. Load and evaluate the model exhibiting the lowest training loss. What kind of behaviour in terms of prediction accuracy can be observed with decreasing the number of training epochs?
###Code
#### Step 1. define and init neural network architecture #############################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
#### Step 2. define loss, training hyperparameters and dataloader ####################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
#### Step 3. run model training ######################################################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
#### Step 4. run model evaluation ####################################################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
###Output
_____no_output_____
###Markdown
**Exercise 2: Evaluation of "shallow" vs. "deep" neural network architectures.** > In addition to the architecture of the lab notebook, evaluate further (more **shallow** as well as more **deep**) neural network architectures by (1) either **removing or adding** layers to the network and/or (2) increasing/decreasing the number of neurons per layer. Train a model (using the architectures you selected) for at least **20 training epochs**. Analyze the prediction performance of the trained models in terms of training time and prediction accuracy.
###Code
#### Step 1. define and init neural network architecture #############################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
#### Step 2. define loss, training hyperparameters and dataloader ####################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
#### Step 3. run model training ######################################################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
#### Step 4. run model evaluation ####################################################################################
# ***************************************************
# INSERT YOUR SOLUTION/CODE HERE
# ***************************************************
###Output
_____no_output_____ |
Examples/Notebooks/Blank.ipynb | ###Markdown
Blank[![GitHubBadge]][GitHubLink] [![ColabBadge]][ColabLink]Blank notebook with setup code to display [Plotly.swift](https://github.com/vojtamolda/Plotly.swift) charts.[ColabBadge]: https://colab.research.google.com/assets/colab-badge.svg "Run notebook in Google Colab"[ColabLink]: https://colab.research.google.com/github/vojtamolda/Plotly.swift/blob/main/Examples/Notebooks/Blank.ipynb[GitHubBadge]: https://img.shields.io/badge/|-Edit_on_GitHub-green.svg?logo=github "Edit notebook's source code on GitHub"[GitHubLink]: https://github.com/vojtamolda/Plotly.swift/blob/main/Examples/Notebooks/Blank.ipynb
###Code
%install '.package(url: "https://github.com/vojtamolda/Plotly.swift.git", .branch("main"))' Plotly
print("\u{001B}[2J") // Clear Output
%include "EnableIPythonDisplay.swift"
import Plotly
###Output
_____no_output_____
###Markdown
Blank[![GitHubBadge]][GitHubLink] [![ColabBadge]][ColabLink]Blank notebook with setup code to display [Plotly.swift](https://github.com/vojtamolda/Plotly.swift) charts.[ColabBadge]: https://colab.research.google.com/assets/colab-badge.svg "Run notebook in Google Colab"[ColabLink]: https://colab.research.google.com/github/vojtamolda/Plotly.swift/blob/master/Examples/Notebooks/Blank.ipynb[GitHubBadge]: https://img.shields.io/badge/|-Edit_on_GitHub-green.svg?logo=github "Edit notebook's source code on GitHub"[GitHubLink]: https://github.com/vojtamolda/Plotly.swift/blob/master/Examples/Notebooks/Blank.ipynb
###Code
%install '.package(url: "https://github.com/vojtamolda/Plotly.swift.git", .branch("master"))' Plotly
print("\u{001B}[2J") // Clear Output
%include "EnableIPythonDisplay.swift"
import Plotly
###Output
_____no_output_____ |
2021/Day 02.ipynb | ###Markdown
Controlling a submarine * https://adventofcode.com/2021/day/2 Time to figure out how to dive with a submarine. The first task is a common one: interpret instructions that map out the submarine's path.
###Code
from __future__ import annotations
from dataclasses import dataclass, replace
from enum import Enum
from functools import reduce
from typing import Iterable
class SubmarineDirection(Enum):
forward = (1, 0)
down = (0, 1)
up = (0, -1)
@dataclass
class SubmarineMove:
direction: SubmarineDirection
dpos: int = 0
ddepth: int = 0
@classmethod
def from_line(cls, line: str) -> SubmarineMove:
dir, amount = line.split()
direction = SubmarineDirection[dir]
return cls(direction, *direction.value) * int(amount)
def __mul__(self, amount: int) -> SubmarineMove:
return replace(self, dpos=self.dpos * amount, ddepth=self.ddepth * amount)
@dataclass
class SubmarinePosition:
position: int = 0
depth: int = 0
def change_position(self, move: SubmarineMove) -> SubmarinePosition:
return replace(
self, position=self.position + move.dpos, depth=self.depth + move.ddepth
)
@classmethod
def from_moves(cls, moves: Iterable) -> SubmarinePosition:
return reduce(cls.change_position, moves, cls())
test_moves = [SubmarineMove.from_line(line) for line in """\
forward 5
down 5
forward 8
up 3
down 8
forward 2
""".splitlines()]
test_pos = SubmarinePosition.from_moves(test_moves)
assert test_pos.position == 15
assert test_pos.depth == 10
assert test_pos.position * test_pos.depth == 150
import aocd
moves = [SubmarineMove.from_line(line) for line in aocd.get_data(day=2, year=2021).splitlines()]
submarine_pos = SubmarinePosition.from_moves(moves)
print("Part 1:", submarine_pos.depth * submarine_pos.position)
###Output
Part 1: 2039912
###Markdown
Part 2: reinterpreting the directionsNow, instead of a simple 2-direction vector problem, we have a slightly more complicated set of moves. The way that the submarine depth changes now depends on the `aim` value, and the `up` and `down` commands only affect the aim.Rather than re-create the `SubmarineMove` class only to rename `ddepth` (delta depth) to `daim` (delta aim), I'm just going to reinterpret the `ddepth` value as delta aim here. Welcome to Technical Debt, 101! :-DIt means we only have to provide a new `SubmarinePosition` implementation to achieve part 2.
###Code
@dataclass
class AimedSubmarinePosition(SubmarinePosition):
aim: int = 0
def change_position(self, move: SubmarineMove) -> AimedSubmarinePosition:
return replace(
self,
position=self.position + move.dpos,
depth=self.depth + (self.aim * move.dpos),
aim=self.aim + move.ddepth, # delta depth is really delta aim
)
test_pos = AimedSubmarinePosition.from_moves(test_moves)
assert test_pos.position == 15
assert test_pos.depth == 60
assert test_pos.position * test_pos.depth == 900
submarine_pos = AimedSubmarinePosition.from_moves(moves)
print("Part 2:", submarine_pos.depth * submarine_pos.position)
###Output
Part 2: 1942068080
|
1_Programacao-em-Python/9_Funcoes.ipynb | ###Markdown
9. Functions - Pieces of a program that are given a specific name and can be called several times during execution - Main advantages: code reuse, modularity, and easier system maintenance 9.1. Function without parameters and without a return value
###Code
def mensagem():
    print('Function text')
mensagem()
mensagem()
mensagem()
###Output
Function text
Function text
Function text
###Markdown
9.2. Function with parameter passing
###Code
def mensagem(texto):
print(texto)
mensagem('text 1')
mensagem('text 2')
mensagem('text 3')
def soma(a, b):
print(a + b)
soma(2, 3)
soma(3, 3)
soma(1, 2)
###Output
6
3
###Markdown
9.3. Function with parameter passing and a return value
###Code
def soma(a, b):
return a + b
soma(3, 2)
# r = 7
r = soma(3, 2)
print(r)
def calcula_energia_potencial_gravitacional(m, h, g = 10):
'''
    Calculates the gravitational potential energy
    Arguments:
      m: mass, given as a float variable
      h: height, given as a float variable
    Optional argument:
      g: gravitational acceleration, with a default value of 10
'''
e = g * m * h
return e
calcula_energia_potencial_gravitacional(30, 12)
calcula_energia_potencial_gravitacional(30, 12, 9.8)
help(calcula_energia_potencial_gravitacional)
###Output
Help on function calcula_energia_potencial_gravitacional in module __main__:
calcula_energia_potencial_gravitacional(m, h, g=10)
    Calculates the gravitational potential energy
    Arguments:
    m: mass, given as a float variable
    h: height, given as a float variable
    Optional argument:
    g: gravitational acceleration, with a default value of 10
|
C4/W4/ungraded_labs/C4_W4_Lab_2_Sunspots_DNN.ipynb | ###Markdown
Ungraded Lab: Predicting Sunspots with Neural Networks (DNN only)In the remaining labs for this week, you will move away from synthetic time series and start building models for real world data. In particular, you will train on the [Sunspots](https://www.kaggle.com/datasets/robervalt/sunspots) dataset: a monthly record of sunspot numbers from January 1749 to July 2018. You will first build a deep neural network here composed of dense layers. This will act as your baseline so you can compare it to the next lab where you will use a more complex architecture.Let's begin! ImportsYou will use the same imports as before with the addition of the [csv](https://docs.python.org/3/library/csv.html) module. You will need this to parse the CSV file containing the dataset.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import csv
###Output
_____no_output_____
###Markdown
Utilities You will only have the `plot_series()` function here because you no longer need the synthetic data generation functions.
###Code
def plot_series(x, y, format="-", start=0, end=None,
title=None, xlabel=None, ylabel=None, legend=None ):
"""
Visualizes time series data
Args:
x (array of int) - contains values for the x-axis
y (array of int or tuple of arrays) - contains the values for the y-axis
format (string) - line style when plotting the graph
label (string) - tag for the line
start (int) - first time step to plot
end (int) - last time step to plot
title (string) - title of the plot
xlabel (string) - label for the x-axis
ylabel (string) - label for the y-axis
legend (list of strings) - legend for the plot
"""
# Setup dimensions of the graph figure
plt.figure(figsize=(10, 6))
# Check if there are more than two series to plot
if type(y) is tuple:
# Loop over the y elements
for y_curr in y:
# Plot the x and current y values
plt.plot(x[start:end], y_curr[start:end], format)
else:
# Plot the x and y values
plt.plot(x[start:end], y[start:end], format)
# Label the x-axis
plt.xlabel(xlabel)
# Label the y-axis
plt.ylabel(ylabel)
# Set the legend
if legend:
plt.legend(legend)
# Set the title
plt.title(title)
# Overlay a grid on the graph
plt.grid(True)
# Draw the graph on screen
plt.show()
###Output
_____no_output_____
###Markdown
Download and Preview the DatasetYou can now download the dataset and inspect the contents. The link in class is from Laurence's repo but we also hosted it in the link below.
###Code
# Download the dataset
!wget https://storage.googleapis.com/tensorflow-1-public/course4/Sunspots.csv
###Output
_____no_output_____
###Markdown
Running the cell below, you'll see that there are only three columns in the dataset: 1. an untitled column containing the month number, 2. the Date, which has the format `YYYY-MM-DD`, and 3. the Mean Total Sunspot Number.
###Code
# Preview the dataset
!head Sunspots.csv
###Output
_____no_output_____
###Markdown
For this lab and the next, you will only need the month number and the mean total sunspot number. You will load those into memory and convert them to arrays that represent a time series.
###Code
# Initialize lists
time_step = []
sunspots = []
# Open CSV file
with open('./Sunspots.csv') as csvfile:
# Initialize reader
reader = csv.reader(csvfile, delimiter=',')
# Skip the first line
next(reader)
# Append row and sunspot number to lists
for row in reader:
time_step.append(int(row[0]))
sunspots.append(float(row[2]))
# Convert lists to numpy arrays
time = np.array(time_step)
series = np.array(sunspots)
# Preview the data
plot_series(time, series, xlabel='Month', ylabel='Monthly Mean Total Sunspot Number')
###Output
_____no_output_____
###Markdown
Split the DatasetNext, you will split the dataset into training and validation sets. There are 3235 points in the dataset and you will use the first 3000 for training.
###Code
# Define the split time
split_time = 3000
# Get the train set
time_train = time[:split_time]
x_train = series[:split_time]
# Get the validation set
time_valid = time[split_time:]
x_valid = series[split_time:]
###Output
_____no_output_____
###Markdown
Prepare Features and LabelsYou can then prepare the dataset windows as before. The window size is set to 30 points (equal to 2.5 years of monthly data), but feel free to change it later if you want to experiment.
###Code
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
"""Generates dataset windows
Args:
series (array of float) - contains the values of the time series
window_size (int) - the number of time steps to include in the feature
batch_size (int) - the batch size
shuffle_buffer(int) - buffer size to use for the shuffle method
Returns:
dataset (TF Dataset) - TF Dataset containing time windows
"""
# Generate a TF Dataset from the series values
dataset = tf.data.Dataset.from_tensor_slices(series)
# Window the data but only take those with the specified size
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
# Flatten the windows by putting its elements in a single batch
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
# Create tuples with features and labels
dataset = dataset.map(lambda window: (window[:-1], window[-1]))
# Shuffle the windows
dataset = dataset.shuffle(shuffle_buffer)
# Create batches of windows
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
# Parameters
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
# Generate the dataset windows
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
###Output
_____no_output_____
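###Markdown
Before building the model, it can help to confirm that the window pipeline produces the shapes you expect. The short check below is an optional addition (not part of the original lab) and only uses the `train_set` created above.
###Code
# Optional sanity check: each batch should contain `batch_size` windows of
# length `window_size` and one label per window, i.e. (32, 30) and (32,).
for x_batch, y_batch in train_set.take(1):
    print(x_batch.shape, y_batch.shape)
###Output
_____no_output_____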
###Markdown
Build the ModelThe model will be a 3-layer dense network as shown below.
###Code
# Build the model
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
# Print the model summary
model.summary()
###Output
_____no_output_____
###Markdown
Tune the Learning RateYou can pick a learning rate by running the same learning rate scheduler code from previous labs.
###Code
# Set the learning rate scheduler
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
# Initialize the optimizer
optimizer = tf.keras.optimizers.SGD(momentum=0.9)
# Set the training parameters
model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer)
# Train the model
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
# Define the learning rate array
lrs = 1e-8 * (10 ** (np.arange(100) / 20))
# Set the figure size
plt.figure(figsize=(10, 6))
# Set the grid
plt.grid(True)
# Plot the loss in log scale
plt.semilogx(lrs, history.history["loss"])
# Increase the tickmarks size
plt.tick_params('both', length=10, width=1, which='both')
# Set the plot boundaries
plt.axis([1e-8, 1e-3, 0, 100])
###Output
_____no_output_____
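###Markdown
A rough way to read a candidate off this sweep programmatically (an optional sketch, not part of the original lab) is to take the learning rate with the lowest recorded loss and back off by about a factor of 10, where the loss curve is usually still stable. It assumes the `history` and `lrs` objects from the cell above.
###Code
# Learning rate at the minimum recorded loss, plus a more conservative pick.
min_lr = lrs[np.argmin(history.history["loss"])]
print("lr at minimum loss:", min_lr)
print("conservative pick (about 10x smaller):", min_lr / 10)
###Output
_____no_output_____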
###Markdown
Train the ModelOnce you've picked a learning rate, you can rebuild the model and start training.
###Code
# Reset states generated by Keras
tf.keras.backend.clear_session()
# Build the Model
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
# Set the learning rate
learning_rate = 2e-5
# Set the optimizer
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
# Set the training parameters
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
# Train the model
history = model.fit(train_set,epochs=100)
###Output
_____no_output_____
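###Markdown
Optionally, you can also inspect how the training MAE evolved over the 100 epochs before moving on to predictions. This is a small addition that only uses the `history` object returned by `model.fit()` above.
###Code
# Plot the MAE recorded during training (the model was compiled with metrics=["mae"]).
plt.figure(figsize=(10, 6))
plt.plot(history.history["mae"])
plt.xlabel("Epoch")
plt.ylabel("MAE")
plt.grid(True)
plt.show()
###Output
_____no_output_____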
###Markdown
Model PredictionNow see if the model generates good results. If you used the default parameters of this notebook, you should see the predictions follow the shape of the ground truth with an MAE of around 15.
###Code
def model_forecast(model, series, window_size, batch_size):
"""Uses an input model to generate predictions on data windows
Args:
model (TF Keras Model) - model that accepts data windows
series (array of float) - contains the values of the time series
window_size (int) - the number of time steps to include in the window
batch_size (int) - the batch size
Returns:
forecast (numpy array) - array containing predictions
"""
# Generate a TF Dataset from the series values
dataset = tf.data.Dataset.from_tensor_slices(series)
# Window the data but only take those with the specified size
dataset = dataset.window(window_size, shift=1, drop_remainder=True)
# Flatten the windows by putting its elements in a single batch
dataset = dataset.flat_map(lambda w: w.batch(window_size))
# Create batches of windows
dataset = dataset.batch(batch_size).prefetch(1)
# Get predictions on the entire dataset
forecast = model.predict(dataset)
return forecast
# Reduce the original series
forecast_series = series[split_time-window_size:-1]
# Use helper function to generate predictions
forecast = model_forecast(model, forecast_series, window_size, batch_size)
# Drop single dimensional axis
results = forecast.squeeze()
# Plot the results
plot_series(time_valid, (x_valid, results))
# Compute the MAE
print(tf.keras.metrics.mean_absolute_error(x_valid, results).numpy())
###Output
_____no_output_____ |
pipelines/ligtheweight-component-and-container-operations/ligtheweight-component-and-container-operations.ipynb | ###Markdown
Combining Python lightweight components and container operations This notebook demos: * Defining a Kubeflow pipeline with the KFP SDK, combining Python lightweight component operations and container operations* Creating an experiment and submitting pipelines to the KFP runtime environment using the KFP SDK Reference documentation: * https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/* https://www.kubeflow.org/docs/pipelines/sdk/build-component/ Prerequisites: Install or update the pipelines SDKYou may need to **restart your notebook kernel** after updating the KFP SDK.This notebook is intended to be run from a Kubeflow notebook server. (From other environments, you would need to pass different arguments to the `kfp.Client` constructor.)
###Code
# You may need to restart your notebook kernel after updating
!python3 -m pip install kfp-server-api --upgrade --user
!python3 -m pip install kfp --upgrade --user
###Output
_____no_output_____
###Markdown
Setup
###Code
EXPERIMENT_NAME = 'Combining Python lightweight components and container operations' # Name of the experiment in the UI
BASE_IMAGE = 'tensorflow/tensorflow:2.0.0b0-py3' # Base image used for components in the pipeline
import kfp
import os
from kfp import compiler
from kfp import components
from kfp import gcp
###Output
_____no_output_____
###Markdown
Create pipeline component Create a python function
###Code
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
print(a, '+', b, '=', a + b)
return a + b
###Output
_____no_output_____
###Markdown
Build a pipeline component from the function
###Code
# Convert the function to a pipeline operation.
add_op = components.func_to_container_op(
add,
base_image=BASE_IMAGE,
)
###Output
_____no_output_____
###Markdown
Create Reusable Components
###Code
echo_op = kfp.components.load_component_from_file(os.path.join(os.path.abspath(os.curdir), 'component.yaml'))
###Output
_____no_output_____
###Markdown
Build a pipeline using the component
###Code
def calc_pipeline(
a:float =0,
b:float =7,
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#You can create explicit dependency between the tasks using xyz_task.after(abc_task)
add_2_task = add_op(a, b)
add_3_task = add_op(add_task.output, add_2_task.output)
echo_op(add_3_task.output,add_2_task.output)
###Output
_____no_output_____
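###Markdown
If you prefer a shareable pipeline package over an immediate run, the same function can also be compiled to a file first. This is an optional sketch using the KFP v1 compiler; the output file name is arbitrary.
###Code
# Compile the pipeline function into a package that can be uploaded via the UI or the client.
kfp.compiler.Compiler().compile(calc_pipeline, 'calc_pipeline.yaml')
###Output
_____no_output_____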
###Markdown
Compile and run the pipelineKubeflow Pipelines lets you group pipeline runs by *Experiments*. You can create a new experiment, or call `kfp.Client().list_experiments()` to see existing ones.If you don't specify the experiment name, the `Default` experiment will be used.You can directly run a pipeline given its function definition:
###Code
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
# Launch a pipeline run given the pipeline function definition
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments,
experiment_name=EXPERIMENT_NAME)
###Output
_____no_output_____ |
ml/notebooks/Pipeline mongoRaw to clean before EDA.ipynb | ###Markdown

###Code
with open("./data/raw_data_2021_01_11_19_59_10.pkl", "rb") as fh:
raw_data = pickle.load(fh)
raw_data.shape
clean_data=preprocessing(raw_data)
clean_data.head()
clean_numeric(clean_data)
#to check if data have inf and nan numbers
# print(np.any(np.isinf(clean_data['cena_za_metr'])))
# print(clean_data['cena_za_metr'].isnull().sum())
# print(np.where(np.isnan(clean_data['cena_za_metr'])))
# print(clean_data.describe())
atrlist=get_atrakcyjnosc()
domlist=get_otodom()
map_atrakcyjnosc2(clean_data,atrlist)
map_otodom2(clean_data,domlist)
clean_data['opis_clean']=clean_data['opis'].progress_apply(lambda x: spacy_tokenizer_lemmatizer(x)).apply(lambda x: ' '.join(x))
pkl_file = './data/clean_data_with_opis_clean' + str(datetime.now().strftime('%Y_%m_%d_%H_%M_%S')) + '.pkl'
#saving df into pickle
clean_data.to_pickle(pkl_file)
with open("./data/clean_data_with_opis_clean2021_01_05_17_26_25.pkl", "rb") as fh:
clean_data = pickle.load(fh)
clean_data.head()
clean_data.shape
###### Pull the current collection from Mongo, preprocess it, and create a pickle from it
# Connection to mongodb and loading data into df
client = MongoClient(url_link)
db = client.GUMTREE
collection = db.mieszkania
data_mongo = pd.DataFrame(list(collection.find()))
with open("./data/raw_data_2021_01_11_19_59_10.pkl", "rb") as fh:
raw_data = pickle.load(fh)
raw_data.drop_duplicates(subset=['mieszkanie_url'],inplace=True)
raw_data.shape
raw_data=preprocessing(raw_data)
raw_data=clean_numeric(raw_data)
atrlist=get_atrakcyjnosc()
domlist=get_otodom()
map_atrakcyjnosc2(raw_data,atrlist)
map_otodom2(raw_data,domlist)
raw_data['opis_clean']=raw_data['opis'].progress_apply(lambda x: spacy_tokenizer_lemmatizer(x)).progress_apply(lambda x: ' '.join(x))
pkl_file = './data/data_clean' + str(datetime.now().strftime('%Y_%m_%d_%H_%M_%S')) + '.pkl'
#saving df into pickle
raw_data.to_pickle(pkl_file)
data_mongo.head()
pkl_file = './data/data_mongo_' + str(datetime.now().strftime('%Y_%m_%d_%H_%M_%S')) + '.pkl'
#saving df into pickle
data_mongo.to_pickle(pkl_file)
###Output
_____no_output_____ |
21jk1-0512.ipynb | ###Markdown
The following was created using the code found [here](https://ja.wikipedia.org/wiki/Luhn%E3%82%A2%E3%83%AB%E3%82%B4%E3%83%AA%E3%82%BA%E3%83%A0).
###Code
def check_number(digits):
_sum = 0
alt = False
for d in reversed(str(digits)):
d = int(d)
assert 0 <= d <= 9
if alt:
d *= 2
if d > 9:
d -= 9
_sum += d
alt = not alt
return (_sum % 10) == 0
from IPython.display import HTML
from ipywidgets import interact, Dropdown, IntSlider
@interact
def _(n="49927398716"):
check = check_number(n)
if check:
print("{}は正しい".format(n))
else:
print("{}は正しくない".format(n))
###Output
_____no_output_____ |
examples/1d_multiple_constraints_example.ipynb | ###Markdown
Define a kernel and functionHere we define a kernel. The function is drawn at random from the GP and is corrupted by Gaussian noise
###Code
# Measurement noise
noise_var = 0.05 ** 2
noise_var2 = 1e-5
# Bounds on the inputs variable
bounds = [(-10., 10.)]
# Define Kernel
kernel = GPy.kern.RBF(input_dim=len(bounds), variance=2., lengthscale=1.0, ARD=True)
kernel2 = kernel.copy()
# set of parameters
parameter_set = safeopt.linearly_spaced_combinations(bounds, 1000)
# Initial safe point
x0 = np.zeros((1, len(bounds)))
# Generate function with safe initial point at x=0
def sample_safe_fun():
fun = safeopt.sample_gp_function(kernel, bounds, noise_var, 100)
while True:
fun2 = safeopt.sample_gp_function(kernel2, bounds, noise_var2, 100)
if fun2(0, noise=False) > 1:
break
def combined_fun(x, noise=True):
return np.hstack([fun(x, noise), fun2(x, noise)])
return combined_fun
###Output
_____no_output_____
###Markdown
Interactive run of the algorithm
###Code
# Define the objective function
fun = sample_safe_fun()
# The statistical model of our objective function and safety constraint
y0 = fun(x0)
gp = GPy.models.GPRegression(x0, y0[:, 0, None], kernel, noise_var=noise_var)
gp2 = GPy.models.GPRegression(x0, y0[:, 1, None], kernel2, noise_var=noise_var2)
# The optimization routine
# opt = safeopt.SafeOptSwarm([gp, gp2], [-np.inf, 0.], bounds=bounds, threshold=0.2)
opt = safeopt.SafeOpt([gp, gp2], parameter_set, [-np.inf, 0.], lipschitz=None, threshold=0.1)
def plot():
# Plot the GP
opt.plot(100)
# Plot the true function
y = fun(parameter_set, noise=False)
for manager, true_y in zip(mpl._pylab_helpers.Gcf.get_all_fig_managers(), y.T):
figure = manager.canvas.figure
figure.gca().plot(parameter_set, true_y, color='C2', alpha=0.3)
plot()
# Obtain next query point
x_next = opt.optimize()
# Get a measurement from the real system
y_meas = fun(x_next)
# Add this to the GP model
opt.add_new_data_point(x_next, y_meas)
plot()
###Output
_____no_output_____ |
Project/YOLOv3-Tensorflow.ipynb | ###Markdown
Create a folder for checkpoints of weights
###Code
%mkdir checkpoints
###Output
_____no_output_____
###Markdown
importing the required libraries
###Code
import cv2
import numpy as np
import tensorflow as tf
from absl import logging
from itertools import repeat
from PIL import Image
from matplotlib import pyplot as plt
from tensorflow.keras import Model
from tensorflow.keras.layers import Add, Concatenate, Lambda
from tensorflow.keras.layers import Conv2D, Input, LeakyReLU
from tensorflow.keras.layers import MaxPool2D, UpSampling2D, ZeroPadding2D
from tensorflow.keras.regularizers import l2
from tensorflow.keras.losses import binary_crossentropy
from tensorflow.keras.losses import sparse_categorical_crossentropy
yolo_iou_threshold = 0.6 # Intersection Over Union (iou) threshold
yolo_score_threshold = 0.6 # Score threshold
weightyolov3 = 'yolov3.weights' # the path to the weight file
weights = 'checkpoints/yolov3.tf' # the path to the checkpoints file
size = 416 # resize an image
checkpoints = 'checkpoints/yolov3.tf'
num_classes = 80 # number of classes in the model
###Output
_____no_output_____
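###Markdown
Note that `yolov3.weights` is expected to already be in the working directory. If it is not, one common way to fetch the official pretrained Darknet weights is shown below; the URL is the standard upstream location, but verify it before relying on it.
###Code
# Download the pretrained YOLOv3 weights if the file is not already present.
!wget -nc https://pjreddie.com/media/files/yolov3.weights
###Output
_____no_output_____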
###Markdown
List of layers in YOLOv3 Fully Convolutional Network (FCN)
###Code
YOLO_V3_LAYERS = [
'yolo_darknet',
'yolo_conv_0',
'yolo_output_0',
'yolo_conv_1',
'yolo_output_1',
'yolo_conv_2',
'yolo_output_2'
]
# The function to load weights from pretrained model
def load_darknet_weights(model, weights_file):
wf = open(weights_file, 'rb')
major, minor, revision, seen, _ = np.fromfile(wf, dtype=np.int32, count=5)
layers = YOLO_V3_LAYERS
for layer_name in layers:
sub_model = model.get_layer(layer_name)
for i, layer in enumerate(sub_model.layers):
if not layer.name.startswith('conv2d'):
continue
batch_norm = None
if i + 1 < len(sub_model.layers) and \
sub_model.layers[i + 1].name.startswith('batch_norm'):
batch_norm = sub_model.layers[i + 1]
logging.info("{}/{} {}".format(
sub_model.name, layer.name, 'bn' if batch_norm else 'bias'))
filters = layer.filters
size = layer.kernel_size[0]
in_dim = layer.input_shape[-1]
if batch_norm is None:
conv_bias = np.fromfile(wf, dtype=np.float32, count=filters)
else:
bn_weights = np.fromfile(wf, dtype=np.float32, count=4*filters)
bn_weights = bn_weights.reshape((4, filters))[[1, 0, 2, 3]]
conv_shape = (filters, in_dim, size, size)
conv_weights = np.fromfile(wf, dtype=np.float32, count=np.product(conv_shape))
conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
if batch_norm is None:
layer.set_weights([conv_weights, conv_bias])
else:
layer.set_weights([conv_weights])
batch_norm.set_weights(bn_weights)
assert len(wf.read()) == 0, 'failed to read weights'
wf.close()
# The function to calculate IoU
def interval_overlap(interval_1, interval_2):
x1, x2 = interval_1
x3, x4 = interval_2
if x3 < x1:
return 0 if x4 < x1 else (min(x2,x4) - x1)
else:
return 0 if x2 < x3 else (min(x2,x4) - x3)
def intersectionOverUnion(box1, box2):
intersect_w = interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])
intersect_h = interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])
intersect_area = intersect_w * intersect_h
w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin
w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin
union_area = w1*h1 + w2*h2 - intersect_area
return float(intersect_area) / union_area
# The function to draw bounding boxes, class names, probability and objects which we want to detect
def draw_outputs(img, outputs, class_names, white_list=None):
boxes, score, classes, nums = outputs
boxes, score, classes, nums = boxes[0], score[0], classes[0], nums[0]
wh = np.flip(img.shape[0:2])
for i in range(nums):
if white_list is not None and class_names[int(classes[i])] not in white_list:
continue
x1y1 = tuple((np.array(boxes[i][0:2]) * wh).astype(np.int32))
x2y2 = tuple((np.array(boxes[i][2:4]) * wh).astype(np.int32))
img = cv2.rectangle(img, x1y1, x2y2, (255, 0, 0), 2)
img = cv2.putText(img, '{} {:.4f}'.format(
class_names[int(classes[i])], score[i]),
x1y1, cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 2)
return img
# The function to normalize the outputs to speed up learning
class BatchNormalization(tf.keras.layers.BatchNormalization):
def call(self, x, training=False):
if training is None: training = tf.constant(False)
training = tf.logical_and(training, self.trainable)
return super().call(x, training)
yolo_anchors = np.array([(10, 13), (16, 30), (33, 23), (30, 61), (62, 45),
(59, 119), (116, 90), (156, 198), (373, 326)], np.float32) / 416
yolo_anchor_masks = np.array([[6, 7, 8], [3, 4, 5], [0, 1, 2]])
def DarknetConv(x, filters, size, strides=1, batch_norm=True):
if strides == 1:
padding = 'same'
else:
x = ZeroPadding2D(((1, 0), (1, 0)))(x) # top left half-padding
padding = 'valid'
x = Conv2D(filters=filters, kernel_size=size,
strides=strides, padding=padding,
use_bias=not batch_norm, kernel_regularizer=l2(0.0005))(x)
if batch_norm:
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.1)(x)
return x
def DarknetResidual(x, filters):
previous = x
x = DarknetConv(x, filters // 2, 1)
x = DarknetConv(x, filters, 3)
x = Add()([previous , x])
return x
def DarknetBlock(x, filters, blocks):
x = DarknetConv(x, filters, 3, strides=2)
for _ in repeat(None, blocks):
x = DarknetResidual(x, filters)
return x
def Darknet(name=None):
x = inputs = Input([None, None, 3])
x = DarknetConv(x, 32, 3)
x = DarknetBlock(x, 64, 1)
x = DarknetBlock(x, 128, 2)
x = x_36 = DarknetBlock(x, 256, 8)
x = x_61 = DarknetBlock(x, 512, 8)
x = DarknetBlock(x, 1024, 4)
return tf.keras.Model(inputs, (x_36, x_61, x), name=name)
def YoloConv(filters, name=None):
def yolo_conv(x_in):
if isinstance(x_in, tuple):
inputs = Input(x_in[0].shape[1:]), Input(x_in[1].shape[1:])
x, x_skip = inputs
x = DarknetConv(x, filters, 1)
x = UpSampling2D(2)(x)
x = Concatenate()([x, x_skip])
else:
x = inputs = Input(x_in.shape[1:])
x = DarknetConv(x, filters, 1)
x = DarknetConv(x, filters * 2, 3)
x = DarknetConv(x, filters, 1)
x = DarknetConv(x, filters * 2, 3)
x = DarknetConv(x, filters, 1)
return Model(inputs, x, name=name)(x_in)
return yolo_conv
def YoloOutput(filters, anchors, classes, name=None):
def yolo_output(x_in):
x = inputs = Input(x_in.shape[1:])
x = DarknetConv(x, filters * 2, 3)
x = DarknetConv(x, anchors * (classes + 5), 1, batch_norm=False)
x = Lambda(lambda x: tf.reshape(x, (-1, tf.shape(x)[1], tf.shape(x)[2],
anchors, classes + 5)))(x)
return tf.keras.Model(inputs, x, name=name)(x_in)
return yolo_output
def yolo_boxes(pred, anchors, classes):
grid_size = tf.shape(pred)[1]
box_xy, box_wh, score, class_probs = tf.split(pred, (2, 2, 1, classes), axis=-1)
box_xy = tf.sigmoid(box_xy)
score = tf.sigmoid(score)
class_probs = tf.sigmoid(class_probs)
pred_box = tf.concat((box_xy, box_wh), axis=-1)
grid = tf.meshgrid(tf.range(grid_size), tf.range(grid_size))
grid = tf.expand_dims(tf.stack(grid, axis=-1), axis=2)
box_xy = (box_xy + tf.cast(grid, tf.float32)) / tf.cast(grid_size, tf.float32)
box_wh = tf.exp(box_wh) * anchors
box_x1y1 = box_xy - box_wh / 2
box_x2y2 = box_xy + box_wh / 2
bbox = tf.concat([box_x1y1, box_x2y2], axis=-1)
return bbox, score, class_probs, pred_box
# The function to suppress non-maximum
def nonMaximumSuppression(outputs, anchors, masks, classes):
boxes, conf, out_type = [], [], []
for output in outputs:
boxes.append(tf.reshape(output[0], (tf.shape(output[0])[0], -1, tf.shape(output[0])[-1])))
conf.append(tf.reshape(output[1], (tf.shape(output[1])[0], -1, tf.shape(output[1])[-1])))
out_type.append(tf.reshape(output[2], (tf.shape(output[2])[0], -1, tf.shape(output[2])[-1])))
bbox = tf.concat(boxes, axis=1)
confidence = tf.concat(conf, axis=1)
class_probs = tf.concat(out_type, axis=1)
scores = confidence * class_probs
boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
boxes=tf.reshape(bbox, (tf.shape(bbox)[0], -1, 1, 4)),
scores=tf.reshape(
scores, (tf.shape(scores)[0], -1, tf.shape(scores)[-1])),
max_output_size_per_class=100,
max_total_size=100,
iou_threshold=yolo_iou_threshold,
score_threshold=yolo_score_threshold)
return boxes, scores, classes, valid_detections
# The main function
def YoloV3(size=None, channels=3, anchors=yolo_anchors,
masks=yolo_anchor_masks, classes=80, training=False):
x = inputs = Input([size, size, channels])
x_36, x_61, x = Darknet(name='yolo_darknet')(x)
x = YoloConv(512, name='yolo_conv_0')(x)
output_0 = YoloOutput(512, len(masks[0]), classes, name='yolo_output_0')(x)
x = YoloConv(256, name='yolo_conv_1')((x, x_61))
output_1 = YoloOutput(256, len(masks[1]), classes, name='yolo_output_1')(x)
x = YoloConv(128, name='yolo_conv_2')((x, x_36))
output_2 = YoloOutput(128, len(masks[2]), classes, name='yolo_output_2')(x)
if training:
return Model(inputs, (output_0, output_1, output_2), name='yolov3')
boxes_0 = Lambda(lambda x: yolo_boxes(x, anchors[masks[0]], classes),
name='yolo_boxes_0')(output_0)
boxes_1 = Lambda(lambda x: yolo_boxes(x, anchors[masks[1]], classes),
name='yolo_boxes_1')(output_1)
boxes_2 = Lambda(lambda x: yolo_boxes(x, anchors[masks[2]], classes),
name='yolo_boxes_2')(output_2)
outputs = Lambda(lambda x: nonMaximumSuppression(x, anchors, masks, classes),
name='nonMaximumSuppression')((boxes_0[:3], boxes_1[:3], boxes_2[:3]))
return Model(inputs, outputs, name='yolov3')
# The loss function
def YoloLoss(anchors, classes=80, ignore_thresh=0.5):
def yolo_loss(y_true, y_pred):
pred_box, pred_obj, pred_class, pred_xywh = yolo_boxes(
y_pred, anchors, classes)
pred_xy = pred_xywh[..., 0:2]
pred_wh = pred_xywh[..., 2:4]
true_box, true_obj, true_class_idx = tf.split(
y_true, (4, 1, 1), axis=-1)
true_xy = (true_box[..., 0:2] + true_box[..., 2:4]) / 2
true_wh = true_box[..., 2:4] - true_box[..., 0:2]
box_loss_scale = 2 - true_wh[..., 0] * true_wh[..., 1]
grid_size = tf.shape(y_true)[1]
grid = tf.meshgrid(tf.range(grid_size), tf.range(grid_size))
grid = tf.expand_dims(tf.stack(grid, axis=-1), axis=2)
true_xy = true_xy * tf.cast(grid_size, tf.float32) - \
tf.cast(grid, tf.float32)
true_wh = tf.math.log(true_wh / anchors)
true_wh = tf.where(tf.math.is_inf(true_wh),
tf.zeros_like(true_wh), true_wh)
obj_mask = tf.squeeze(true_obj, -1)
# ignore when Intersection Over Union is over threshold
true_box_flat = tf.boolean_mask(true_box, tf.cast(obj_mask, tf.bool))
best_iou = tf.reduce_max(intersectionOverUnion(
pred_box, true_box_flat), axis=-1)
ignore_mask = tf.cast(best_iou < ignore_thresh, tf.float32)
xy_loss = obj_mask * box_loss_scale * \
tf.reduce_sum(tf.square(true_xy - pred_xy), axis=-1)
wh_loss = obj_mask * box_loss_scale * \
tf.reduce_sum(tf.square(true_wh - pred_wh), axis=-1)
obj_loss = binary_crossentropy(true_obj, pred_obj)
obj_loss = obj_mask * obj_loss + \
(1 - obj_mask) * ignore_mask * obj_loss
class_loss = obj_mask * sparse_categorical_crossentropy(
true_class_idx, pred_class)
xy_loss = tf.reduce_sum(xy_loss, axis=(1, 2, 3))
wh_loss = tf.reduce_sum(wh_loss, axis=(1, 2, 3))
obj_loss = tf.reduce_sum(obj_loss, axis=(1, 2, 3))
class_loss = tf.reduce_sum(class_loss, axis=(1, 2, 3))
return xy_loss + wh_loss + obj_loss + class_loss
return yolo_loss
# The function to transform targets outputs tuple of shape
@tf.function
def transform_targets_for_output(y_true, grid_size, anchor_idxs, classes):
N = tf.shape(y_true)[0]
y_true_out = tf.zeros(
(N, grid_size, grid_size, tf.shape(anchor_idxs)[0], 6))
anchor_idxs = tf.cast(anchor_idxs, tf.int32)
indexes = tf.TensorArray(tf.int32, 1, dynamic_size=True)
updates = tf.TensorArray(tf.float32, 1, dynamic_size=True)
idx = 0
for i in tf.range(N):
for j in tf.range(tf.shape(y_true)[1]):
if tf.equal(y_true[i][j][2], 0):
continue
anchor_eq = tf.equal(
anchor_idxs, tf.cast(y_true[i][j][5], tf.int32))
if tf.reduce_any(anchor_eq):
box = y_true[i][j][0:4]
box_xy = (y_true[i][j][0:2] + y_true[i][j][2:4]) / 2
anchor_idx = tf.cast(tf.where(anchor_eq), tf.int32)
grid_xy = tf.cast(box_xy // (1/grid_size), tf.int32)
indexes = indexes.write(
idx, [i, grid_xy[1], grid_xy[0], anchor_idx[0][0]])
updates = updates.write(
idx, [box[0], box[1], box[2], box[3], 1, y_true[i][j][4]])
idx += 1
return tf.tensor_scatter_nd_update(
y_true_out, indexes.stack(), updates.stack())
def transform_targets(y_train, anchors, anchor_masks, classes):
outputs = []
grid_size = 13
anchors = tf.cast(anchors, tf.float32)
anchor_area = anchors[..., 0] * anchors[..., 1]
box_wh = y_train[..., 2:4] - y_train[..., 0:2]
box_wh = tf.tile(tf.expand_dims(box_wh, -2),
(1, 1, tf.shape(anchors)[0], 1))
box_area = box_wh[..., 0] * box_wh[..., 1]
intersection = tf.minimum(box_wh[..., 0], anchors[..., 0]) * \
tf.minimum(box_wh[..., 1], anchors[..., 1])
iou = intersection / (box_area + anchor_area - intersection)
anchor_idx = tf.cast(tf.argmax(iou, axis=-1), tf.float32)
anchor_idx = tf.expand_dims(anchor_idx, axis=-1)
y_train = tf.concat([y_train, anchor_idx], axis=-1)
for anchor_idxs in anchor_masks:
outputs.append(transform_targets_for_output(
y_train, grid_size, anchor_idxs, classes))
grid_size *= 2
return tuple(outputs) # [x, y, w, h, obj, class]
def preprocess_image(x_train, size):
return (tf.image.resize(x_train, (size, size))) / 255
# Creating the model, loading weights and class names
yolo = YoloV3(classes=num_classes)
load_darknet_weights(yolo, weightyolov3)
yolo.save_weights(checkpoints)
class_names = ["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck",
"boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench",
"bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe",
"backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
"sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
"tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl",
"banana","apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut",
"cake","chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop",
"mouse","remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink",
"refrigerator","book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]
def detect_objects(img_path, white_list):
image = img_path
img = tf.image.decode_image(open(image, 'rb').read(), channels=3)
img = tf.expand_dims(img, 0)
img = preprocess_image(img, size)
boxes, scores, classes, nums = yolo(img)
img = cv2.imread(image)
img = draw_outputs(img, (boxes, scores, classes, nums), class_names, white_list)
cv2.imwrite('detected_{:}'.format(img_path), img)
detected = Image.open('detected_{:}'.format(img_path))
detected.show()
plt.title('Detected image')
plt.imshow(detected)
detect_objects('test.jpg', ['bear'])
###Output
_____no_output_____ |
crawling/crawling_101.ipynb | ###Markdown
Using the selenium library
###Code
from selenium import webdriver  # import assumed; it is not shown in the original cells

# open chrome
browser = webdriver.Chrome('/Users/klee30810/Downloads/chromedriver')
# go to url
url = 'https://www.naver.com'
browser.get(url)
###Output
_____no_output_____
###Markdown
URL structure: https://<domain>.com/...?parameter(variable=value), with additional pairs joined by & (variable=value&variable=value)
###Code
search_words = ['청주+글램핑','청주+레스토랑']
for word in search_words:
print(word)
url = f'https://www.google.com/search?q={word}'
print(url)
browser.get(url)
###Output
청주+글램핑
https://www.google.com/search?q=청주+글램핑
청주+레스토랑
https://www.google.com/search?q=청주+레스토랑
###Markdown
Tag attributes are separated by spaces; a tag wraps its content (e.g. <tag attr="value">Hong Gil-dong</tag>); tags nest as parent and descendant tags.
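A tiny self-contained illustration of this (an added example using made-up HTML that mirrors the `div.press_logo > a > img` selector used below):
###Code
from bs4 import BeautifulSoup

# Made-up snippet: attributes are space-separated inside the tag, the tag wraps
# its content, and tags nest as parent / descendant.
sample = '<div class="press_logo"><a href="#"><img alt="Example Press"></a></div>'
sample_soup = BeautifulSoup(sample, 'html.parser')
print(sample_soup.select('div.press_logo > a > img')[0]['alt'])  # -> Example Press
###Output
_____no_output_____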
###Code
html = browser.page_source
html
browser = webdriver.Chrome('/Users/klee30810/Downloads/chromedriver')
url = 'https://news.naver.com/main/read.naver?mode=LSD&mid=shm&sid1=101&oid=018&aid=0005107017'
browser.get(url)
from bs4 import BeautifulSoup
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
# title = soup.select('h3') # find all h3 tags
# title = soup.select('.tts_head') # find class tts_head tag
title = soup.select('#articleTitle')[0] # find id articleTitle tag; usually only one per page
print(len(title)) # check the number of h3 tags
title
title.text # get the content without the tag markup
company = soup.select('div.press_logo > a > img')[0]['alt'] # extract an attribute value from the tag
company
# parent / descendant tags
search = soup.select('div > strong.media_end_summary')
search
###Output
_____no_output_____ |
CidadeAgil_notebook.ipynb | ###Markdown
**Imported data, plotting and dataframe libraries.** **Initial premises:** By using an unsupervised model, such as clustering, find groups of similar municipalities in the State of São Paulo based on a comparison of health metrics.**Initial variables:** (additional info about the indicators in the project diary)* 1 - Mortalidade_infantil* 2 - IDHM_Educacao* 3 - Densidade_demográfica* 4 - Renda_per_capita* 5 - Grau_de_Urbanizacao* 6 - Indice_de_Gini* 7 - Esgoto_Sanitario* 8 - Qtd_Estabelecimentos* 9 - Total_Medicos* 10 - Total_Doses_Aplicadas We import the libraries:- Pandas for reading and manipulating the dataframe- Matplotlib and Seaborn for plotting graphs- Numpy for mathematical operations. Then we use pandas' .read_csv function to load our database:
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
df_raw = pd.read_csv('https://raw.githubusercontent.com/magemongo/CA_MU_DS_VINCE/master/dados_municipios_1.csv', encoding='UTF-8', error_bad_lines=False, sep=';')
df_raw
'''a = df_raw[['Coleta_de_Lixo','Abastecimento_de_Agua','Esgoto_Sanitario']]
df_raw['infra_media'] = (a['Coleta_de_Lixo'] + a['Abastecimento_de_Agua'] + a['Esgoto_Sanitario'])/3'''
# possible merge of the Coleta_de_Lixo, Abastecimento_de_Agua and Esgoto_Sanitario indicators into a single unified column
###Output
_____no_output_____
###Markdown
From the original database we selected only the 10 indicators we found most relevant for our analysis, and then added them to a separate dataset called df_saude.
###Code
columns = ['Cod_IBGE','Mortalidade_infantil','IDHM_Educacao','Densidade_demográfica','Renda_per_capita','Grau_de_Urbanizacao','Indice_de_Gini','Esgoto_Sanitario','Qtd_Estabelecimentos','Total_Medicos','Total_Doses_Aplicadas']
df_saude = df_raw[columns]
#df_saude.set_index('Cod_IBGE', inplace= True)
df_saude.head()
###Output
_____no_output_____
###Markdown
**Data cleaning and normalization**. In this step we check for null values in our dataset and then handle each case appropriately. We will also normalize absolute figures here so that they are always relative values and therefore less biased. We can see, for example, that our dataset has 28 null values in the Total_Medicos column.
###Code
df_saude.isnull().sum()
###Output
_____no_output_____
###Markdown
However, since its values are absolute, it is difficult to visualize its behavior:
###Code
sns.boxplot(df_saude['Total_Medicos'])
###Output
_____no_output_____
###Markdown
First we will transform the total count into a rate per thousand inhabitants using a simple function, generating a new column called Razao_Medico_mil_Hab. We then look at the distribution of our data in a boxplot. There is a considerable number of extreme values, and since we do not want to leave empty values we will use the median instead of the mean, as it is a measure of central tendency less sensitive to *outliers*.
###Code
df_saude['Razao_Medico_mil_Hab'] = (df_saude['Total_Medicos']*1000)/df_raw['Populacao'] # convert the absolute figures to relative ones
sns.boxplot(df_saude['Razao_Medico_mil_Hab'])
df_saude['Razao_Medico_mil_Hab'].replace(np.nan, df_saude['Razao_Medico_mil_Hab'].median(), inplace=True) # replace the null values with the median
df_saude.drop('Total_Medicos', axis=1, inplace=True) # drop the Total_Medicos column
###Output
/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py:6746: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._update_inplace(new_data)
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py:3997: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
Let's now check whether we have solved the problem of null values in our dataset:
###Code
df_saude.isnull().sum()
###Output
_____no_output_____
###Markdown
Another part of data cleaning is checking that each variable is stored as the appropriate data type: categorical, continuous or integer. We check this with the .dtypes function. We see that Mortalidade_infantil is stored as object (categorical) when we want it to be a float (continuous). We found that this happens because of the presence of ' - ' strings in some of the rows, so we have a few more null values to handle:
###Code
df_saude.dtypes
df_saude['Mortalidade_infantil'].replace('-', np.nan, inplace=True) # replace the '-' strings with null values
df_saude['Mortalidade_infantil'] = df_saude['Mortalidade_infantil'].astype(float) # convert Mortalidade_infantil from object to float
sns.boxplot(df_saude['Mortalidade_infantil'])
df_saude['Mortalidade_infantil'].replace(np.nan, df_saude['Mortalidade_infantil'].median(), inplace=True) # again use the median instead of the mean because of the many outliers
###Output
/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py:6746: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._update_inplace(new_data)
###Markdown
Finally, we convert the last absolute figures to relative ones, in this case Total_Doses_Aplicadas and Qtd_Estabelecimentos: using a simple rate per thousand inhabitants we transform their values, rename the columns to Doses_Aplicadas_mil_Hab and Estab_por_mil_Hab, and then take a look at our dataset:
###Code
columns = ['Qtd_Estabelecimentos','Total_Doses_Aplicadas']
for column in columns:
df_saude[column] = (df_saude[column]*1000)/df_raw['Populacao']
df_saude.rename(columns={'Total_Doses_Aplicadas':'Doses_Aplicadas_mil_Hab', 'Qtd_Estabelecimentos':'Estab_por_mil_Hab'}, inplace=True)
df_saude.head()
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
after removing the cwd from sys.path.
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py:4133: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
**Statistical analysis of the data, outlier detection and removal, preliminary conclusions and speculation.** Our dataset is now ready for us to start making our first assumptions. Using seaborn, we plot a heatmap of the correlation between our variables; it is in our best interest that no correlation exceeds 0.80 in absolute value, since that can indicate collinearity and harm the quality of the model we intend to use:
###Code
df_saude.set_index('Cod_IBGE', inplace= True)
plt.figure(figsize=(15,8))
sns.heatmap(df_saude.corr(), annot=True)
###Output
_____no_output_____
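###Markdown
As a complement to the heatmap (an added sketch, not part of the original analysis), the pairs above the 0.80 threshold can also be listed programmatically using only pandas and numpy, which are already imported:
###Code
# List feature pairs whose absolute correlation exceeds 0.80.
corr = df_saude.corr().abs()
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # keep each pair only once
pairs = corr.where(mask).stack()
print(pairs[pairs > 0.80])
###Output
_____no_output_____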
###Markdown
We initially chose to use the IDHM as one of our variables, but we noticed a correlation with Renda_per_capita higher than recommended. Our assumption is that the IDHM, being a highly refined index that even includes a population's income in its calculation, was creating distortions in our dataset. Based on that premise, we decided to use only IDHM_Educacao, as it is a more appropriate indicator within our analysis. However, we still cannot say much about our data. Using boxplots, we can see the presence of many outliers that prevent us from properly visualizing our data.
###Code
sns.boxplot(data=df_saude, orient='h')
print('Boxplot por variável:')
print()
###Output
Boxplot por variável:
###Markdown
Since we cannot exclude municipalities from our analysis, let's separate the outliers into a new dataset:
###Code
pos_3q = []
pre_1q = []
for column in df_saude: # define the outlier fences using the 1.5 * interquartile range (IQR) rule
temp_1 = (np.quantile(df_saude[column],0.25))
temp_3 = (np.quantile(df_saude[column],0.75))
print(column,':', temp_3)
print('Se for < que',(temp_1 - 1.5*(temp_3 - temp_1)),'Ou se for > que', (temp_3 + 1.5*(temp_3 - temp_1)))
pos_3q.append(temp_3 + 1.5*(temp_3 - temp_1))
pre_1q.append(temp_1 - 1.5*(temp_3 - temp_1))
###Output
Mortalidade_infantil : 23.26
Se for < que -11.14 Ou se for > que 43.900000000000006
IDHM_Educacao : 0.7090000000000001
Se for < que 0.5465 Ou se for > que 0.8065000000000002
Densidade_demográfica : 125.71
Se for < que -133.83999999999997 Ou se for > que 281.44
Renda_per_capita : 674.87
Se for < que 253.42000000000004 Ou se for > que 927.74
Grau_de_Urbanizacao : 96.06
Se for < que 62.81000000000001 Ou se for > que 116.00999999999999
Indice_de_Gini : 0.495
Se for < que 0.3025 Ou se for > que 0.6105
Esgoto_Sanitario : 98.71
Se for < que 76.36 Ou se for > que 112.11999999999999
Estab_por_mil_Hab : 2.073111740722825
Se for < que -0.7884048441390191 Ou se for > que 3.7900216916399314
Doses_Aplicadas_mil_Hab : 583.587786259542
Se for < que 110.62389547036088 Ou se for > que 867.3661207330506
Razao_Medico_mil_Hab : 1.7425939756036843
Se for < que -1.137483947519387 Ou se for > que 3.4706407294775268
###Markdown
Let's analyze some of the distributions, filtering the outliers above the third quartile.- Renda_per_capita:
###Code
filter = df_saude['Renda_per_capita'] < 927.74
sns.distplot(df_saude[filter]['Renda_per_capita'])
print('número de registros desconsiderando os filtrados:',df_saude[filter]['Renda_per_capita'].shape[0])
sns.boxplot(df_saude[filter]['Renda_per_capita'])
print('novo boxplot desconsiderando dados filtrados:')
###Output
novo boxplot desconsiderando dados filtrados:
###Markdown
- Doses_Aplicadas_mil_Hab:
###Code
filter = df_saude['Doses_Aplicadas_mil_Hab'] < 867.3661207330506
sns.distplot(df_saude[filter]['Doses_Aplicadas_mil_Hab'])
print('número de registros desconsiderando os filtrados:',df_saude[filter]['Doses_Aplicadas_mil_Hab'].shape[0])
sns.boxplot(df_saude[filter]['Doses_Aplicadas_mil_Hab'])
print('novo boxplot desconsiderando dados filtrados:')
###Output
novo boxplot desconsiderando dados filtrados:
###Markdown
- Densidade_demográfica:
###Code
filter = df_saude['Densidade_demográfica'] < 281.44
sns.distplot(df_saude[filter]['Densidade_demográfica'])
print('número de registros desconsiderando os filtrados:',df_saude[filter]['Densidade_demográfica'].shape[0])
sns.boxplot(df_saude[filter]['Densidade_demográfica'])
print('novo boxplot desconsiderando dados filtrados:')
###Output
novo boxplot desconsiderando dados filtrados:
###Markdown
We observe that Densidade_demográfica has the largest number of outliers, and indeed the most extreme values, which is why we assumed it would be a good starting point for splitting our data. Let's generate a new dataset with only the outliers and another without them, and examine them for a moment:
###Code
filter = df_saude['Densidade_demográfica'] > 281.44
df_saude[filter].describe() # dataset with only the population-density outliers
filter = df_saude['Densidade_demográfica'] <= 281.44
df_saude[filter].describe() # dataset with the population-density outliers removed
###Output
_____no_output_____
###Markdown
We can see that even after separating the outliers, the remaining indicators are not meaningfully affected in terms of value range. This leads us to believe that a more interesting approach may be to bin the municipalities by their population density. For that we use a logarithmic scale, which is better suited to data with extreme ranges, as is the case for this indicator. Let's see what the data distribution looks like under this new scale:
###Code
plt.hist(df_saude['Densidade_demográfica'], bins=np.logspace(np.log10(10),np.log10(10000))) # histogram of the population-density data on a logarithmic scale
plt.gca().set_xscale("log")
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the visualization looks much more appropriate than the previous one, which seems to validate our initial assumption of using binning instead of the original values. We will create a new column in the dataset with the binned data, but keep the original values just in case.
###Code
bins = np.logspace(np.log10(1),np.log10(14207.57), 5) # create the bins
df_saude['den_binned'] = pd.cut(df_saude['Densidade_demográfica'], bins=bins, labels=[1,2,3,4]) # add the new column with the data split into bins 1, 2, 3 and 4
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
###Markdown
Now that we have created the new den_binned column, we can observe how these data behave.
###Code
sns.distplot(df_saude.den_binned)
df_saude.tail()
###Output
_____no_output_____
###Markdown
At this point we believe our dataset is ready for us to start testing clustering models. The process will be done in the Watson Studio tool SPSS Modeler, so we will export our data as a new spreadsheet.
###Code
df_saude.to_csv('/content/df_saude.csv', encoding='Latin-1', sep=';', decimal=',')
for column in df_saude:
if df_saude[column].max() > 1:
if column != 'den_binned':
df_saude[column] = df_saude[column]/df_saude[column].max()
df_saude.head()
df_saude.to_csv('/content/df_saude_normal.csv', encoding='Latin-1', sep=';', decimal=',')
###Output
_____no_output_____
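###Markdown
Before handing the exported spreadsheets over to Watson Studio, a quick local sanity check of the clustering idea can be run with scikit-learn. This is only an illustrative sketch added here (the project's actual clustering was done in SPSS Modeler), and the choice of 3 clusters is an assumption, not a project decision.
###Code
from sklearn.cluster import KMeans

# Fit k-means on the normalized numeric indicators; den_binned is categorical-like,
# so it is dropped for this rough check.
features = df_saude.drop(columns=['den_binned']).astype(float)
kmeans = KMeans(n_clusters=3, random_state=0).fit(features)
print(pd.Series(kmeans.labels_).value_counts())
###Output
_____no_output_____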
###Markdown
**The next modeling steps were carried out on the IBM WATSON STUDIO platform** **Clustering models** We chose an unsupervised method for the 1st phase of the project so we could segment the 645 municipalities of the state of São Paulo, with the goal of grouping them more homogeneously. With the clusters it was possible to explore the data and understand the similarities among the municipalities of the state of São Paulo, in order to extract information that could support a supervised analysis in the second phase of the project. **Regression model** We chose a supervised method as the 2nd phase of the project so we could make the most of the results obtained in the previous analysis. The goal now is to create a model that can simulate the behavior of the target variable within each of the identified groups, generating value and new insights about our data. **Model API test**
###Code
import requests
# Paste your Watson Machine Learning service apikey here
# Use the rest of the code sample as written
apikey = "c-MaC8NEy0LDY_wHNKyx0LhwnEei7bnAYFnXmQgxK1jp"
# Get an IAM token from IBM Cloud
url = "https://iam.bluemix.net/oidc/token"
headers = { "Content-Type" : "application/x-www-form-urlencoded" }
data = "apikey=" + apikey + "&grant_type=urn:ibm:params:oauth:grant-type:apikey"
IBM_cloud_IAM_uid = "bx"
IBM_cloud_IAM_pwd = "bx"
response = requests.post( url, headers=headers, data=data, auth=( IBM_cloud_IAM_uid, IBM_cloud_IAM_pwd ) )
iam_token = response.json()["access_token"]
ml_instance_id = "ba659903-440e-4610-850f-115a8070c983"
iam_token
###Output
_____no_output_____
###Markdown
**Regression Model 1**Built specifically for the municipalities of cluster 1 - the Satisfactory group
###Code
import urllib3, requests, json
# NOTE: generate iam_token and retrieve ml_instance_id based on provided documentation
header = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + iam_token, 'ML-Instance-ID': ml_instance_id}
# NOTE: manually define and pass the array(s) of values to be scored in the next line
payload_scoring = {"fields": ["Cod_IBGE", "Mortalidade_infantil", "IDHM_Educacao", "Renda_per_capita", "Grau_de_Urbanizacao", "Esgoto_Sanitario", "Estab_por_mil_Hab", "Doses_Aplicadas_mil_Hab", "Razao_Medico_mil_Hab", "Abastecimento_de_Agua", "Coleta_de_Lixo", "$KM-K-Means"],
"values": [[3500105,0.069779,0.75,0.493077,0.9638,0.9903,0.601063,0.318119,0.496123,0.9976,0.9989,"cluster-1"]]}
response_scoring = requests.post('https://us-south.ml.cloud.ibm.com/v3/wml_instances/ba659903-440e-4610-850f-115a8070c983/deployments/17615cbb-7d30-4cdf-8b8f-15db943bcca3/online', json=payload_scoring, headers=header)
print("Scoring response")
print(json.loads(response_scoring.text))
###Output
_____no_output_____
###Markdown
**Regression Model 2**Built specifically for the municipalities of cluster 2 - the Alert group
###Code
import urllib3, requests, json
# NOTE: generate iam_token and retrieve ml_instance_id based on provided documentation
header = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + iam_token, 'ML-Instance-ID': ml_instance_id}
# NOTE: manually define and pass the array(s) of values to be scored in the next line
payload_scoring = {"fields": ["Cod_IBGE", "Mortalidade_infantil", "IDHM_Educacao", "Renda_per_capita", "Grau_de_Urbanizacao", "Esgoto_Sanitario", "Estab_por_mil_Hab", "Doses_Aplicadas_mil_Hab", "Razao_Medico_mil_Hab", "Abastecimento_de_Agua", "Coleta_de_Lixo", "$KM-K-Means"],
"values": [[3500105,0.069779,0.75,0.493077,0.9638,0.9903,0.601063,0.318119,0.496123,0.9976,0.9989,"cluster-2"]]}
response_scoring = requests.post('https://us-south.ml.cloud.ibm.com/v3/wml_instances/ba659903-440e-4610-850f-115a8070c983/deployments/4a35839a-87cf-4d6d-8478-e1800b67c018/online', json=payload_scoring, headers=header)
print("Scoring response")
print(json.loads(response_scoring.text))
###Output
_____no_output_____
###Markdown
**Regression Model 3**Built specifically for the municipalities of cluster 3 - the Attention group
###Code
import urllib3, requests, json
# NOTE: generate iam_token and retrieve ml_instance_id based on provided documentation
header = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + iam_token, 'ML-Instance-ID': ml_instance_id}
# NOTE: manually define and pass the array(s) of values to be scored in the next line
payload_scoring = {"fields": ["Cod_IBGE", "Mortalidade_infantil", "IDHM_Educacao", "Renda_per_capita", "Grau_de_Urbanizacao", "Esgoto_Sanitario", "Estab_por_mil_Hab", "Doses_Aplicadas_mil_Hab", "Razao_Medico_mil_Hab", "Abastecimento_de_Agua", "Coleta_de_Lixo", "$KM-K-Means"],
"values": [[3500105,0.069779,0.75,0.493077,0.9638,0.9903,0.601063,0.318119,0.496123,0.9976,0.9989,"cluster-2"]]}
response_scoring = requests.post('https://us-south.ml.cloud.ibm.com/v3/wml_instances/ba659903-440e-4610-850f-115a8070c983/deployments/03ddaeb3-a667-4087-922a-043229e0dce6/online', json=payload_scoring, headers=header)
print("Scoring response")
print(json.loads(response_scoring.text))
###Output
_____no_output_____ |
Machining_feature_retrieval.ipynb | ###Markdown
Finding Accuracy
###Code
# Finding accuracy on test set
spp_train = intermediate_layer_model.predict(x_train)
top5_acc =0
top1_acc =0
top5_lbl = list()
sim_list = list()
null_index = list()
classes = 24
for i in tqdm (range(0,len(x_test))):
sim_list.clear()
#test_feat = spp_test[i]
test_feat = tf.reshape(x_test[i],[1,max_val,32,1])
spp_test = intermediate_layer_model.predict(test_feat)
y_t = y_test[i]
for j in range(0,classes):
if(y_t[j].numpy()==1.0):
test_lbl = Y_list[j]
for k in range(0,len(x_train)):
#train_feat = tf.reshape(x_train[i],[1,max_val,32,1])
sim_list.append(abs(np.linalg.norm(spp_train[k]-spp_test)))
id = list(range(0,len(x_train)))
Sim_models_id = [x for _,x in sorted(zip(sim_list,id))]
top5 = (Sim_models_id[0:5])
top5_lbl.clear()
for l in range(0,5):
yi = y_train[top5[l]]
for m in range(0,classes):
if ((yi[m].numpy())==1.0):
top5_lbl.append(Y_list[m])
if(test_lbl in top5_lbl):
top5_acc+=1
if (top5_lbl):
if(test_lbl == top5_lbl[0]):
top1_acc+=1
else:
null_index.append(i)
print("Accuracy for ",len(x_test),"testing files is ",top1_acc/len(x_test))
print("Top 5 accuracy for ",len(x_test),"testing files is ",top5_acc/len(x_test))
print(len(null_index))
###Output
100%|██████████| 3600/3600 [09:57<00:00, 6.02it/s]
###Markdown
Retrieving similar features from the dataset for a sample file
###Code
#loading stl dataset paths
db_folder = "data/stl"
os.path.abspath(db_folder)
ind = 0
stl_file_path = list()
sub_folders = os.listdir(db_folder)
for sub_folder in sub_folders:
sub_folder_path = os.path.join(db_folder, sub_folder)
stl_files = os.listdir(sub_folder_path)
for stl_file in stl_files:
if stl_file.endswith(".STL"):
stl_file_path.append(os.path.join(sub_folder_path, stl_file))
ind+=1
def get_spp_out(feat):
test_feat = tf.reshape(feat,[1,max_val,32,1])
spp_test = intermediate_layer_model.predict(test_feat)
return spp_test
#test file
test_id = 1990
print("Test file\n","\nID:\t",test_id,"\tFamily:\t",file_names[test_id])
test_feat = zero_pad(features[test_id])
spp_test_feat = get_spp_out(test_feat)
#comparing similarity between one test file and all the features individually
n_files = len(features)
sim_list = list()
for i in range(0,n_files):
if(i!=test_id):
feat = zero_pad(features[i])
feat = tf.reshape(feat,[1,max_val,32,1])
spp_feat = intermediate_layer_model.predict(feat)
sim_list.append(abs(np.linalg.norm(spp_feat - spp_test_feat)))
else:
sim_list.append(float("inf"))
id = list(range(0,n_files))
Similar_models_id = [x for _,x in sorted(zip(sim_list,id))]
top_5 = (Similar_models_id[0:5])
top_5
# Visualization of the CAD files
from solid import*
import viewscad
r = viewscad.Renderer()
print("Test file\n","\nID:\t",test_id,"\tFamily:\t",file_names[test_id])
r.render_stl(stl_file_path[test_id])
print("\nTop-5 similar models and their IDs\n")
for i in range(0,5):
yi = file_names[top_5[i]]
print("ID:\t",top_5[i],"\tFamily:\t",yi)
r.render_stl(stl_file_path[top_5[i]])
###Output
Test file
ID: 1990 Family: 20_v_circular_end_blind_slot
|
notebooks/labs/L2_Inferential_Statistics_Data_Hunt.ipynb | ###Markdown
Data Science Foundations Lab 2: Data Hunt II**Instructor**: Wesley Beckner**Contact**: [email protected]'s right you heard correctly. It's the data hunt part TWO. Preparing Environment and Importing Data Import Packages
###Code
!pip install -U plotly
# our standard libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import seaborn as sns
from ipywidgets import interact
# our stats libraries
import random
import scipy.stats as stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
import scipy
# our scikit-Learn library for the regression models
import sklearn
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
###Output
_____no_output_____
###Markdown
Import and Clean Data
###Code
df = pd.read_csv("https://raw.githubusercontent.com/wesleybeckner/"\
"technology_fundamentals/main/assets/truffle_rates.csv")
df = df.loc[df['rate'] > 0]
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis 🍫 L2 Q1 Finding Influential FeaturesWhich of the five features (base_cake, truffle_type, primary_flavor, secondary_flavor, color_group) of the truffles is most influential on production rate?Back your answer with both a visualization of the distributions (boxplot, kernel denisty estimate, histogram, violin plot) and a statistical test (moods median, ANOVA, t-test)* Be sure: * everything is labeled (can you improve your labels with additional descriptive statistical information e.g. indicate mean, std, etc.) * you meet the assumptions of your statistical test 🍫 L2 Q1.1 VisualizationUse any number of visualizations. Here is an example to get you started:
###Code
# Example: a KDE of the truffle_type and base_cake columns
fig, ax = plt.subplots(2, 1, figsize=(12,12))
sns.kdeplot(x=df['rate'], hue=df['truffle_type'], fill=True, ax=ax[0])
sns.kdeplot(x=df['rate'], hue=df['base_cake'], fill=True, ax=ax[1])
###Output
_____no_output_____
###Markdown
🍫 L2 Q1.2 Statistical AnalysisWhat statistical tests can you perform to evaluate your hypothesis from the visualizations (maybe you think one particular feature is significant). Here's an ANOVA on the `truffle_type` column to get you started:
###Code
model = ols('rate ~ C({})'.format('truffle_type'), data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
display(anova_table)
###Output
_____no_output_____
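###Markdown
One added sketch for checking the ANOVA assumptions mentioned below (normally distributed residuals and equal variances across groups), using the `stats` module already imported at the top of the notebook:
###Code
# Shapiro-Wilk on the model residuals (normality) and Levene across groups (equal variance).
print(stats.shapiro(model.resid))
groups = [grp['rate'].values for _, grp in df.groupby('truffle_type')]
print(stats.levene(*groups))
###Output
_____no_output_____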
###Markdown
> Is this P value significant? What is the null hypothesis? How do we check the assumptions of ANOVA? 🍫 L2 Q2 Finding Best and Worst Groups 🍫 L2 Q2.1 Compare Every Group to the WholeOf the primary flavors (feature), what 5 flavors (groups) would you recommend Truffletopia discontinue?Iterate through every level (i.e. pound, cheese, sponge cakes) of every category (i.e. base cake, primary flavor, secondary flavor) and use moods median testing to compare the group distribution to the grand median rate. After you've computed a moods median test on every group, filter any data above a significance level of 0.05 Return the groups with the lowest median performance (your table need not look exactly like the one I've created) We would want to cut the following primary flavors. Check to see that you get a similar answer. rip wild cherry cream.```['Coconut', 'Pink Lemonade', 'Chocolate', 'Wild Cherry Cream', 'Gingersnap']``` 🍫 L2 Q2.2 Beyond Statistical Testing: Using ReasoningLet's look at the total profile of the products associated with the five worst primary flavors. Given the number of different products made with any of these flavors, would you alter your answer at all?
###Code
# 1. filter df for only bottom five flavors
# 2. groupby all columns besides rate
# 3. describe the rate column.
# by doing this we can evaluate just how much sampling variety we have for the
# worst performing flavors.
bottom_five = ['Coconut', 'Pink Lemonade', 'Chocolate', 'Wild Cherry Cream', 'Gingersnap']
df.loc[df['primary_flavor'].isin(bottom_five)].groupby(list(df.columns[:-1]))['rate'].describe()
###Output
_____no_output_____
###Markdown
🍫 L2 Q2.3 The Jelly Filled ConundrumYour boss notices the Jelly filled truffles are being produced much faster than the candy outer truffles and suggests expanding into this product line. What is your response? Use the visualization tool below to help you think about this problem, then create any visualizations or analyses of your own.[sunburst charts](https://plotly.com/python/sunburst-charts/)
###Code
def sun(path=[['base_cake', 'truffle_type', 'primary_flavor', 'secondary_flavor', 'color_group'],
['truffle_type', 'base_cake', 'primary_flavor', 'secondary_flavor', 'color_group']]):
fig = px.sunburst(df, path=path,
color='rate',
color_continuous_scale='viridis',
)
fig.update_layout(
margin=dict(l=20, r=20, t=20, b=20),
height=650
)
fig.show()
interact(sun)
###Output
_____no_output_____ |
notebooks/step3b_global_tsa.ipynb | ###Markdown
CIS 545 Final Project Big Portfolio Learner: Time Series Analysis Team members: Steven Brooks & Chenlia Xu
###Code
import random
import numpy as np
import json
import matplotlib
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from datetime import datetime
import glob
import seaborn as sns
import re
import os
%%capture
## Install boto3 if it is not already available:
!pip3 install boto3
import boto3
from botocore import UNSIGNED
from botocore.config import Config
s3 = boto3.resource('s3', config=Config(signature_version=UNSIGNED))
s3.Bucket('cis545project').download_file('data/stock_data.zip', 'stock_data.zip')
s3.Bucket('cis545project').download_file('data/technical_data.zip', 'technical_data.zip')
%%capture
if not os.path.exists("stock_data"):
os.makedirs("stock_data")
!unzip /content/stock_data.zip -d /content/stock_data
!rm -f stock_data/.gitempty
if not os.path.exists("technical_data"):
os.makedirs("technical_data")
!unzip /content/technical_data.zip -d /content/technical_data
!rm -f technical_data/.gitempty
###Output
_____no_output_____
###Markdown
Setup for Spark
###Code
%%capture
!wget -nc https://downloads.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
!tar xf spark-3.1.2-bin-hadoop3.2.tgz
!apt install libkrb5-dev
!pip install findspark
!pip install sparkmagic
!pip install pyspark
!pip install pyspark --user
!apt update
!apt install gcc python-dev libkrb5-dev
import os
import pyspark
from pyspark.sql import SQLContext
from pyspark.sql import SparkSession
from pyspark.sql.types import *
import pyspark.sql.functions as F
import os
spark = SparkSession.builder.getOrCreate()
%load_ext sparkmagic.magics
os.environ['SPARK_HOME'] = '/content/spark-3.1.2-bin-hadoop3.2'
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
try:
if(spark == None):
spark = SparkSession.builder.appName('Initial').getOrCreate()
sqlContext=SQLContext(spark)
except NameError:
spark = SparkSession.builder.appName('Initial').getOrCreate()
sqlContext=SQLContext(spark)
###Output
The sparkmagic.magics extension is already loaded. To reload it, use:
%reload_ext sparkmagic.magics
###Markdown
Setup for Darts (Time Series Modeling)
###Code
%%capture
!pip install 'u8darts[all]'
import torch
from darts import TimeSeries
from darts.utils.timeseries_generation import gaussian_timeseries, linear_timeseries, sine_timeseries
from darts.models import RNNModel, TCNModel, TransformerModel, NBEATSModel, BlockRNNModel
from darts.metrics import mape, smape
from darts.dataprocessing.transformers import Scaler
from darts.utils.timeseries_generation import datetime_attribute_timeseries
from darts.datasets import AirPassengersDataset, MonthlyMilkDataset
torch.manual_seed(1); np.random.seed(1) # for reproducibility
###Output
_____no_output_____
###Markdown
Load the stock data
###Code
stock_data_sdf = spark.read.load(
'stock_data/*.csv',
format = 'csv',
header = 'true',
inferSchema = 'true',
sep = ','
)
###Output
_____no_output_____
###Markdown
Section 1: Train Test SplitWe will train on data from the years 2002 to 2017. Our validation set will be the year 2018. Our test set will be the year 2019.
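A sketch of that split on the Spark dataframe loaded above (the date column name `Date` is an assumption; adjust it to the actual schema of the stock data):

```python
# 2002-2017 for training, 2018 for validation, 2019 for testing
train_sdf = stock_data_sdf.filter(F.col('Date') < '2018-01-01')
val_sdf = stock_data_sdf.filter((F.col('Date') >= '2018-01-01') & (F.col('Date') < '2019-01-01'))
test_sdf = stock_data_sdf.filter((F.col('Date') >= '2019-01-01') & (F.col('Date') < '2020-01-01'))
```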
###Code
series_air = AirPassengersDataset().load()
series_milk = MonthlyMilkDataset().load()
series_air
###Output
_____no_output_____ |
homeworks/HW2/task4_BP_estimation.ipynb | ###Markdown
A Neural Network for Regression (Estimate blood pressure from PPG signal)*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [HW page](http://kovan.ceng.metu.edu.tr/~sinan/DL/index.html) on the course website.*Having gained some experience with neural networks, let us train a network that estimates the blood pressure from a PPG signal window.All of your work for this exercise will be done in this notebook. A Photoplethysmograph (PPG) signalA PPG (photoplethysmograph) signal is a signal obtained with a pulse oximeter, which illuminates the skin and measures changes in light absorption. A PPG signal carries rich information about the status of the cardiovascular health of a person, such as breathing rate, heart rate and blood pressure. An example is shown below, where you also see the blood pressure signal that we will estimate (the data also has the ECG signal, which you should ignore). Constructing the Dataset In this task, you are expected to perform the full pipeline for creating a learning system from scratch. Here is how you should construct the dataset:* Download the dataset from the following website, and only take "Part 1" from it (it is too big): https://archive.ics.uci.edu/ml/datasets/Cuff-Less+Blood+Pressure+Estimation* Take a window of size $W$ from the PPG channel between time $t$ and $t+W$. Let us call this $\textbf{x}_t$.* Take the corresponding window of size $W$ from the ABP (arterial blood pressure) channel between time $t$ and $t+W$. Find the maxima and minima of this signal within the window (you can use "findpeaks" from Matlab or "find_peaks_cwt" from scipy). Here is an example window from the ABP signal, and its peaks: * Calculate the average of the maxima, call it $y^1_t$, and the average of the minima, call it $y^2_t$.* Slide the window over the PPG signals and collect many $(\textbf{x}_t, \textbf{y}_t)$ instances, where $\textbf{y}_t = (y^1_t, y^2_t)$. In other words, your network outputs two values.* This will be your input-output for training the network (a short sketch of this extraction step is given below).
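A minimal sketch of that window-extraction step (this is only an illustration: the channel arrays, window size, and the use of `scipy.signal.find_peaks` instead of `find_peaks_cwt` are assumptions, not requirements of the assignment):

```python
import numpy as np
from scipy.signal import find_peaks

def make_instance(ppg, abp, t, W):
    x_t = ppg[t:t + W]                  # PPG window -> network input
    window = abp[t:t + W]               # corresponding ABP window
    maxima, _ = find_peaks(window)      # indices of local maxima
    minima, _ = find_peaks(-window)     # indices of local minima
    y1 = window[maxima].mean()          # average of the maxima
    y2 = window[minima].mean()          # average of the minima
    return x_t, np.array([y1, y2])      # one (x_t, y_t) training instance
```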
###Code
import random
import numpy as np
from metu.data_utils import load_dataset
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
from cs231n.classifiers.neural_net_for_regression import TwoLayerNet
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [2, 1, 4], [2, 1, 4]])
return X, y
net = init_toy_model()
X, y = init_toy_data()
###Output
_____no_output_____
###Markdown
Forward pass: compute scoresOpen the file `cs231n/classifiers/neural_net_for_regression.py` and look at the method `TwoLayerNet.loss`. This function is very similar to the loss functions you have written for the previous exercises: It takes the data and weights and computes the *regression* scores, the squared error loss, and the gradients on the parameters. To be more specific, you will implement the following loss function:$$\frac{1}{2}\sum_i\sum_{j} (o_{ij} - y_{ij})^2 + \frac{1}{2}\lambda\sum_j w_j^2,$$where $i$ runs through the samples in the batch; $o_{ij}$ is the prediction of the network for the $i^{th}$ sample for output $j$, and $y_{ij}$ is the correct value; $\lambda$ is the weight of the regularization term.The first layer uses ReLU as the activation function. The output layer does not use any activation functions.Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
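As a reference for what the method should compute, here is a hedged numpy sketch of that forward pass and loss (the argument names mirror the assignment's `W1`, `b1`, `W2`, `b2` convention; regularizing both weight matrices is an assumption about the intended $\lambda$ term, and the real `TwoLayerNet.loss` internals may differ):

```python
import numpy as np

def forward_and_loss(X, y, W1, b1, W2, b2, reg):
    # hidden ReLU layer followed by a linear output layer (no output activation)
    h = np.maximum(0, X.dot(W1) + b1)
    scores = h.dot(W2) + b2
    # squared-error data loss plus L2 regularization on both weight matrices
    data_loss = 0.5 * np.sum((scores - y) ** 2)
    reg_loss = 0.5 * reg * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return scores, data_loss + reg_loss
```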
###Code
scores = net.loss(X)
print ('Your scores:')
print (scores)
print('')
print ('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print (correct_scores)
print('')
# The difference should be very small. We get < 1e-7
print ('Difference between your scores and correct scores:')
print (np.sum(np.abs(scores - correct_scores)))
###Output
Your scores:
[[-0.81233741 -1.27654624 -0.70335995]
[-0.17129677 -1.18803311 -0.47310444]
[-0.51590475 -1.01354314 -0.8504215 ]
[-0.15419291 -0.48629638 -0.52901952]
[-0.00618733 -0.12435261 -0.15226949]]
correct scores:
[[-0.81233741 -1.27654624 -0.70335995]
[-0.17129677 -1.18803311 -0.47310444]
[-0.51590475 -1.01354314 -0.8504215 ]
[-0.15419291 -0.48629638 -0.52901952]
[-0.00618733 -0.12435261 -0.15226949]]
Difference between your scores and correct scores:
3.68027209324e-08
###Markdown
Forward pass: compute lossIn the same function, implement the second part that computes the data and regularization loss.
###Code
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 66.3406756909
print('loss:', loss)
# should be very small, we get < 1e-10
print ('Difference between your loss and correct loss:')
print (np.sum(np.abs(loss - correct_loss)))
###Output
loss: 66.3406756909
Difference between your loss and correct loss:
2.54800625044e-11
###Markdown
Backward passImplement the rest of the function. This will compute the gradient of the loss with respect to the variables `W1`, `b1`, `W2`, and `b2`. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
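It may help to write out the analytic gradients of the squared-error loss before coding them up (this continues the sketch above, with the same caveats about it being an assumption rather than the official solution):

```python
import numpy as np

def backward(X, y, h, scores, W1, W2, reg):
    dscores = scores - y                      # gradient of the squared-error data loss
    grads = {}
    grads['W2'] = h.T.dot(dscores) + reg * W2
    grads['b2'] = np.sum(dscores, axis=0)
    dh = dscores.dot(W2.T)                    # backprop into the hidden layer
    dh[h <= 0] = 0                            # ReLU gate
    grads['W1'] = X.T.dot(dh) + reg * W1
    grads['b1'] = np.sum(dh, axis=0)
    return grads
```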
###Code
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name])
print ('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
###Output
W2 max relative error: 3.755046e-04
b2 max relative error: 1.443387e-06
W1 max relative error: 5.463838e-04
b1 max relative error: 2.188996e-07
###Markdown
Load the PPG dataset for training your regression network
###Code
-
###Output
Number of instances in the training set: 23669
Number of instances in the validation set: 263
Number of instances in the testing set: 1578
###Markdown
Now train our network on the PPG dataset
###Code
# Now, let's train a neural network
input_size = X_train.shape[1]  # dimensionality of one PPG window (the network's input size)
hidden_size = 500 # TODO: Choose a suitable hidden layer size
num_classes = 2 # We have two outputs
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=50000, batch_size=64,
learning_rate=1e-5, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
#val_err = ... # TODO: Perform prediction on the validation set
val_err = np.sum(np.square(net.predict(X_val) - y_val), axis=1).mean()
print ('Validation error: ', val_err)
###Output
iteration 0 / 50000: loss 534330.183591
iteration 100 / 50000: loss 501336.635259
iteration 200 / 50000: loss 451644.055357
iteration 300 / 50000: loss 378601.070585
iteration 400 / 50000: loss 375677.026417
iteration 500 / 50000: loss 331224.788309
iteration 600 / 50000: loss 313916.900274
iteration 700 / 50000: loss 254183.236685
iteration 800 / 50000: loss 211796.817715
iteration 900 / 50000: loss 253944.680456
iteration 1000 / 50000: loss 206615.596300
iteration 1100 / 50000: loss 184450.992006
iteration 1200 / 50000: loss 200981.136536
iteration 1300 / 50000: loss 153581.951940
iteration 1400 / 50000: loss 33338.694818
iteration 1500 / 50000: loss 121730.195078
iteration 1600 / 50000: loss 93690.350939
iteration 1700 / 50000: loss 120235.627992
iteration 1800 / 50000: loss 55887.420876
iteration 1900 / 50000: loss 37860.012189
iteration 2000 / 50000: loss 97783.312405
iteration 2100 / 50000: loss 95808.354790
iteration 2200 / 50000: loss 91224.020585
iteration 2300 / 50000: loss 65411.650035
iteration 2400 / 50000: loss 73963.228049
iteration 2500 / 50000: loss 100228.298466
iteration 2600 / 50000: loss 72141.724880
iteration 2700 / 50000: loss 99560.535845
iteration 2800 / 50000: loss 68260.499694
iteration 2900 / 50000: loss 85566.594876
iteration 3000 / 50000: loss 68253.316157
iteration 3100 / 50000: loss 47478.798357
iteration 3200 / 50000: loss 71682.557370
iteration 3300 / 50000: loss 55259.252782
iteration 3400 / 50000: loss 43088.090828
iteration 3500 / 50000: loss 50131.031938
iteration 3600 / 50000: loss 55681.344138
iteration 3700 / 50000: loss 53786.701601
iteration 3800 / 50000: loss 47674.958703
iteration 3900 / 50000: loss 46269.356469
iteration 4000 / 50000: loss 41226.722019
iteration 4100 / 50000: loss 50456.110290
iteration 4200 / 50000: loss 31262.146473
iteration 4300 / 50000: loss 52271.646154
iteration 4400 / 50000: loss 38009.236974
iteration 4500 / 50000: loss 50517.520839
iteration 4600 / 50000: loss 48748.319540
iteration 4700 / 50000: loss 37883.692506
iteration 4800 / 50000: loss 48163.686155
iteration 4900 / 50000: loss 38834.417139
iteration 5000 / 50000: loss 49833.956842
iteration 5100 / 50000: loss 37473.260348
iteration 5200 / 50000: loss 22508.918266
iteration 5300 / 50000: loss 44819.569245
iteration 5400 / 50000: loss 36218.820818
iteration 5500 / 50000: loss 35031.128803
iteration 5600 / 50000: loss 35442.522311
iteration 5700 / 50000: loss 29592.753643
iteration 5800 / 50000: loss 37125.209922
iteration 5900 / 50000: loss 33071.422018
iteration 6000 / 50000: loss 50711.631995
iteration 6100 / 50000: loss 36626.956940
iteration 6200 / 50000: loss 31963.708785
iteration 6300 / 50000: loss 32856.145513
iteration 6400 / 50000: loss 36583.528925
iteration 6500 / 50000: loss 31920.863215
iteration 6600 / 50000: loss 30206.229017
iteration 6700 / 50000: loss 34701.552415
iteration 6800 / 50000: loss 37768.558471
iteration 6900 / 50000: loss 35069.716678
iteration 7000 / 50000: loss 38239.992569
iteration 7100 / 50000: loss 30433.656583
iteration 7200 / 50000: loss 35668.422686
iteration 7300 / 50000: loss 36894.342225
iteration 7400 / 50000: loss 30212.682203
iteration 7500 / 50000: loss 39973.608519
iteration 7600 / 50000: loss 33090.373128
iteration 7700 / 50000: loss 30982.045499
iteration 7800 / 50000: loss 28433.581688
iteration 7900 / 50000: loss 30570.078456
iteration 8000 / 50000: loss 34023.266008
iteration 8100 / 50000: loss 30313.859017
iteration 8200 / 50000: loss 30403.643335
iteration 8300 / 50000: loss 32669.630737
iteration 8400 / 50000: loss 32125.770980
iteration 8500 / 50000: loss 32562.270506
iteration 8600 / 50000: loss 26260.541767
iteration 8700 / 50000: loss 30969.072655
iteration 8800 / 50000: loss 35427.100290
iteration 8900 / 50000: loss 30691.190901
iteration 9000 / 50000: loss 28351.775023
iteration 9100 / 50000: loss 32346.229702
iteration 9200 / 50000: loss 33794.071304
iteration 9300 / 50000: loss 39634.648919
iteration 9400 / 50000: loss 29765.376633
iteration 9500 / 50000: loss 29280.737959
iteration 9600 / 50000: loss 43467.543374
iteration 9700 / 50000: loss 25916.221347
iteration 9800 / 50000: loss 29189.679520
iteration 9900 / 50000: loss 33440.527503
iteration 10000 / 50000: loss 32557.160264
iteration 10100 / 50000: loss 30727.421866
iteration 10200 / 50000: loss 39153.886111
iteration 10300 / 50000: loss 33413.371650
iteration 10400 / 50000: loss 35425.779166
iteration 10500 / 50000: loss 30976.999230
iteration 10600 / 50000: loss 33930.644984
iteration 10700 / 50000: loss 36418.912151
iteration 10800 / 50000: loss 30481.569765
iteration 10900 / 50000: loss 23265.772910
iteration 11000 / 50000: loss 29129.651268
iteration 11100 / 50000: loss 34956.172549
iteration 11200 / 50000: loss 34017.697496
iteration 11300 / 50000: loss 32684.313722
iteration 11400 / 50000: loss 29571.925161
iteration 11500 / 50000: loss 24462.671804
iteration 11600 / 50000: loss 22032.304924
iteration 11700 / 50000: loss 29409.501127
iteration 11800 / 50000: loss 27664.374866
iteration 11900 / 50000: loss 29461.826129
iteration 12000 / 50000: loss 26028.597995
iteration 12100 / 50000: loss 29680.761865
iteration 12200 / 50000: loss 25841.113297
iteration 12300 / 50000: loss 27426.796999
iteration 12400 / 50000: loss 38820.015207
iteration 12500 / 50000: loss 38963.604201
iteration 12600 / 50000: loss 34715.208619
iteration 12700 / 50000: loss 36469.647240
iteration 12800 / 50000: loss 24337.500842
iteration 12900 / 50000: loss 33591.867707
iteration 13000 / 50000: loss 36949.963688
iteration 13100 / 50000: loss 35798.749040
iteration 13200 / 50000: loss 29770.812684
iteration 13300 / 50000: loss 29195.761822
iteration 13400 / 50000: loss 31404.371012
iteration 13500 / 50000: loss 26270.323208
iteration 13600 / 50000: loss 33053.413470
iteration 13700 / 50000: loss 32980.959108
iteration 13800 / 50000: loss 26735.050486
iteration 13900 / 50000: loss 43345.718428
iteration 14000 / 50000: loss 32050.433414
iteration 14100 / 50000: loss 24373.849666
iteration 14200 / 50000: loss 31877.203666
iteration 14300 / 50000: loss 25248.489268
iteration 14400 / 50000: loss 27760.956980
iteration 14500 / 50000: loss 38283.485623
iteration 14600 / 50000: loss 31512.167187
iteration 14700 / 50000: loss 29193.646360
iteration 14800 / 50000: loss 37183.708466
iteration 14900 / 50000: loss 26483.888991
iteration 15000 / 50000: loss 27397.546813
iteration 15100 / 50000: loss 36567.860607
iteration 15200 / 50000: loss 22061.778018
iteration 15300 / 50000: loss 18728.037055
iteration 15400 / 50000: loss 27714.316528
iteration 15500 / 50000: loss 32958.322352
iteration 15600 / 50000: loss 26448.060390
iteration 15700 / 50000: loss 26149.705476
iteration 15800 / 50000: loss 32646.082633
iteration 15900 / 50000: loss 27154.416957
iteration 16000 / 50000: loss 31470.152537
iteration 16100 / 50000: loss 32001.514287
iteration 16200 / 50000: loss 28563.910788
iteration 16300 / 50000: loss 31887.056165
iteration 16400 / 50000: loss 29705.610582
iteration 16500 / 50000: loss 37282.337120
iteration 16600 / 50000: loss 25181.471227
iteration 16700 / 50000: loss 33905.821779
iteration 16800 / 50000: loss 23055.367955
iteration 16900 / 50000: loss 30422.595391
iteration 17000 / 50000: loss 25223.639515
iteration 17100 / 50000: loss 29332.840986
iteration 17200 / 50000: loss 28747.307395
iteration 17300 / 50000: loss 27938.986528
iteration 17400 / 50000: loss 30462.908234
iteration 17500 / 50000: loss 33383.587473
iteration 17600 / 50000: loss 30491.926745
iteration 17700 / 50000: loss 28862.365103
iteration 17800 / 50000: loss 34151.513020
iteration 17900 / 50000: loss 30310.746685
iteration 18000 / 50000: loss 30600.834542
iteration 18100 / 50000: loss 28342.575884
iteration 18200 / 50000: loss 29141.959828
iteration 18300 / 50000: loss 38335.007900
iteration 18400 / 50000: loss 27061.217270
iteration 18500 / 50000: loss 33478.493997
iteration 18600 / 50000: loss 30307.340195
iteration 18700 / 50000: loss 26674.265042
iteration 18800 / 50000: loss 33310.364723
iteration 18900 / 50000: loss 24201.421186
iteration 19000 / 50000: loss 36522.489384
iteration 19100 / 50000: loss 33906.834676
iteration 19200 / 50000: loss 32267.817033
iteration 19300 / 50000: loss 25502.834828
iteration 19400 / 50000: loss 35374.508432
iteration 19500 / 50000: loss 24453.203555
iteration 19600 / 50000: loss 27480.425100
iteration 19700 / 50000: loss 25739.401765
iteration 19800 / 50000: loss 34817.330214
iteration 19900 / 50000: loss 30790.293896
iteration 20000 / 50000: loss 50936.255772
iteration 20100 / 50000: loss 27774.657881
iteration 20200 / 50000: loss 36325.119878
iteration 20300 / 50000: loss 26682.417342
iteration 20400 / 50000: loss 26517.366452
iteration 20500 / 50000: loss 28404.404600
iteration 20600 / 50000: loss 23678.780047
iteration 20700 / 50000: loss 27219.183373
iteration 20800 / 50000: loss 28626.753386
iteration 20900 / 50000: loss 35552.682291
iteration 21000 / 50000: loss 31247.715540
iteration 21100 / 50000: loss 28438.229833
iteration 21200 / 50000: loss 24482.248822
iteration 21300 / 50000: loss 25903.533791
iteration 21400 / 50000: loss 27380.746300
iteration 21500 / 50000: loss 27371.614226
iteration 21600 / 50000: loss 25989.611212
iteration 21700 / 50000: loss 41728.294850
iteration 21800 / 50000: loss 28213.390699
iteration 21900 / 50000: loss 31075.766068
iteration 22000 / 50000: loss 33181.819090
iteration 22100 / 50000: loss 31919.278681
iteration 22200 / 50000: loss 29223.474029
iteration 22300 / 50000: loss 23141.016248
iteration 22400 / 50000: loss 34661.074827
iteration 22500 / 50000: loss 29488.899999
iteration 22600 / 50000: loss 28733.274908
iteration 22700 / 50000: loss 23325.648704
iteration 22800 / 50000: loss 33119.209484
iteration 22900 / 50000: loss 46071.755375
iteration 23000 / 50000: loss 24950.402578
iteration 23100 / 50000: loss 35319.150980
iteration 23200 / 50000: loss 31775.541885
iteration 23300 / 50000: loss 27628.725833
iteration 23400 / 50000: loss 35920.956054
iteration 23500 / 50000: loss 33369.388448
iteration 23600 / 50000: loss 32471.854248
iteration 23700 / 50000: loss 36606.036239
iteration 23800 / 50000: loss 32190.129225
iteration 23900 / 50000: loss 32960.690522
iteration 24000 / 50000: loss 41022.927022
iteration 24100 / 50000: loss 44426.764149
iteration 24200 / 50000: loss 33104.568075
iteration 24300 / 50000: loss 34671.496235
iteration 24400 / 50000: loss 36169.645700
iteration 24500 / 50000: loss 35519.959144
iteration 24600 / 50000: loss 32329.045604
iteration 24700 / 50000: loss 34119.533346
iteration 24800 / 50000: loss 23626.481455
iteration 24900 / 50000: loss 37882.698730
iteration 25000 / 50000: loss 27951.657616
iteration 25100 / 50000: loss 28381.954887
iteration 25200 / 50000: loss 28761.600530
iteration 25300 / 50000: loss 27742.726797
iteration 25400 / 50000: loss 29775.171525
iteration 25500 / 50000: loss 27620.737751
iteration 25600 / 50000: loss 34965.653380
iteration 25700 / 50000: loss 31691.077699
iteration 25800 / 50000: loss 30940.766038
iteration 25900 / 50000: loss 29295.831693
iteration 26000 / 50000: loss 37159.593877
iteration 26100 / 50000: loss 29007.619257
iteration 26200 / 50000: loss 30129.694347
iteration 26300 / 50000: loss 27357.227159
iteration 26400 / 50000: loss 30749.115428
iteration 26500 / 50000: loss 27867.276541
iteration 26600 / 50000: loss 26258.761578
iteration 26700 / 50000: loss 29261.130793
iteration 26800 / 50000: loss 30074.242329
iteration 26900 / 50000: loss 27806.560692
iteration 27000 / 50000: loss 38754.173914
iteration 27100 / 50000: loss 24311.501835
iteration 27200 / 50000: loss 32273.969551
iteration 27300 / 50000: loss 31770.290071
iteration 27400 / 50000: loss 27958.253092
iteration 27500 / 50000: loss 27650.161314
iteration 27600 / 50000: loss 37572.552306
iteration 27700 / 50000: loss 25165.372860
iteration 27800 / 50000: loss 36331.985821
iteration 27900 / 50000: loss 37588.900597
iteration 28000 / 50000: loss 27036.208562
iteration 28100 / 50000: loss 41147.924838
iteration 28200 / 50000: loss 26287.809187
iteration 28300 / 50000: loss 27376.571412
iteration 28400 / 50000: loss 28303.598894
iteration 28500 / 50000: loss 27685.969866
iteration 28600 / 50000: loss 25750.822229
iteration 28700 / 50000: loss 37851.520510
iteration 28800 / 50000: loss 27835.744354
iteration 28900 / 50000: loss 27529.971967
iteration 29000 / 50000: loss 38564.464655
iteration 29100 / 50000: loss 25498.945939
iteration 29200 / 50000: loss 26039.432507
iteration 29300 / 50000: loss 26841.432829
iteration 29400 / 50000: loss 27056.502188
iteration 29500 / 50000: loss 37122.743544
iteration 29600 / 50000: loss 33506.167595
iteration 29700 / 50000: loss 33280.678923
iteration 29800 / 50000: loss 28670.311549
iteration 29900 / 50000: loss 29338.852632
iteration 30000 / 50000: loss 31679.044904
iteration 30100 / 50000: loss 29295.880651
iteration 30200 / 50000: loss 26004.723245
iteration 30300 / 50000: loss 38724.601429
iteration 30400 / 50000: loss 30385.485150
iteration 30500 / 50000: loss 28378.403320
iteration 30600 / 50000: loss 28134.868213
iteration 30700 / 50000: loss 30368.070557
iteration 30800 / 50000: loss 31086.542384
iteration 30900 / 50000: loss 29852.765460
iteration 31000 / 50000: loss 27838.541026
iteration 31100 / 50000: loss 25670.087797
iteration 31200 / 50000: loss 32180.946144
iteration 31300 / 50000: loss 29294.861736
iteration 31400 / 50000: loss 28753.086263
iteration 31500 / 50000: loss 22387.719556
iteration 31600 / 50000: loss 39186.792554
iteration 31700 / 50000: loss 33415.079776
iteration 31800 / 50000: loss 36923.867723
iteration 31900 / 50000: loss 30406.276754
iteration 32000 / 50000: loss 27612.221694
iteration 32100 / 50000: loss 28154.618462
iteration 32200 / 50000: loss 32429.376022
iteration 32300 / 50000: loss 24073.805498
iteration 32400 / 50000: loss 34254.370953
iteration 32500 / 50000: loss 30329.204104
iteration 32600 / 50000: loss 31532.994529
iteration 32700 / 50000: loss 30129.341579
iteration 32800 / 50000: loss 29602.301266
iteration 32900 / 50000: loss 30088.917103
iteration 33000 / 50000: loss 35629.869629
iteration 33100 / 50000: loss 29213.048729
iteration 33200 / 50000: loss 33868.482729
iteration 33300 / 50000: loss 30033.799724
iteration 33400 / 50000: loss 29065.567479
iteration 33500 / 50000: loss 29387.688517
iteration 33600 / 50000: loss 33008.136705
iteration 33700 / 50000: loss 30912.641489
iteration 33800 / 50000: loss 27055.679879
iteration 33900 / 50000: loss 23752.842520
iteration 34000 / 50000: loss 25750.529557
iteration 34100 / 50000: loss 29627.482442
iteration 34200 / 50000: loss 25497.904737
iteration 34300 / 50000: loss 27289.934635
iteration 34400 / 50000: loss 23914.372273
iteration 34500 / 50000: loss 31098.354553
iteration 34600 / 50000: loss 22623.457136
iteration 34700 / 50000: loss 33332.111285
iteration 34800 / 50000: loss 27512.092362
iteration 34900 / 50000: loss 34958.089863
iteration 35000 / 50000: loss 31692.725070
iteration 35100 / 50000: loss 27949.191614
iteration 35200 / 50000: loss 39192.351551
iteration 35300 / 50000: loss 31774.278988
iteration 35400 / 50000: loss 35670.243240
iteration 35500 / 50000: loss 28750.836671
iteration 35600 / 50000: loss 34968.339421
iteration 35700 / 50000: loss 45858.184321
iteration 35800 / 50000: loss 28255.089935
iteration 35900 / 50000: loss 27939.932402
iteration 36000 / 50000: loss 34123.194966
iteration 36100 / 50000: loss 29154.704541
iteration 36200 / 50000: loss 27565.503882
iteration 36300 / 50000: loss 29746.945418
iteration 36400 / 50000: loss 28929.998759
iteration 36500 / 50000: loss 28177.826306
iteration 36600 / 50000: loss 35460.947319
iteration 36700 / 50000: loss 27615.626486
iteration 36800 / 50000: loss 34199.707210
iteration 36900 / 50000: loss 28360.563038
iteration 37000 / 50000: loss 23624.305510
iteration 37100 / 50000: loss 31137.579902
iteration 37200 / 50000: loss 37207.167413
iteration 37300 / 50000: loss 38031.087303
iteration 37400 / 50000: loss 27545.345811
iteration 37500 / 50000: loss 25282.140814
iteration 37600 / 50000: loss 26086.092360
iteration 37700 / 50000: loss 31687.971876
iteration 37800 / 50000: loss 37586.410005
iteration 37900 / 50000: loss 19648.896653
iteration 38000 / 50000: loss 26974.946687
iteration 38100 / 50000: loss 34125.356781
iteration 38200 / 50000: loss 27450.759698
iteration 38300 / 50000: loss 26218.489787
iteration 38400 / 50000: loss 29587.361658
iteration 38500 / 50000: loss 32529.065356
iteration 38600 / 50000: loss 36043.483563
iteration 38700 / 50000: loss 31360.514912
iteration 38800 / 50000: loss 34531.489324
iteration 38900 / 50000: loss 34315.281410
iteration 39000 / 50000: loss 29752.326085
iteration 39100 / 50000: loss 22456.579883
iteration 39200 / 50000: loss 29149.510369
iteration 39300 / 50000: loss 22727.754888
iteration 39400 / 50000: loss 27094.105787
iteration 39500 / 50000: loss 22245.346553
iteration 39600 / 50000: loss 28931.644017
iteration 39700 / 50000: loss 30673.185221
iteration 39800 / 50000: loss 38195.713337
iteration 39900 / 50000: loss 27154.815175
iteration 40000 / 50000: loss 25142.198913
iteration 40100 / 50000: loss 37685.036742
iteration 40200 / 50000: loss 36676.728734
iteration 40300 / 50000: loss 29254.774175
iteration 40400 / 50000: loss 32005.677007
iteration 40500 / 50000: loss 27360.635622
iteration 40600 / 50000: loss 27899.159934
iteration 40700 / 50000: loss 36032.447060
iteration 40800 / 50000: loss 31879.102832
iteration 40900 / 50000: loss 28608.775191
iteration 41000 / 50000: loss 41313.594994
iteration 41100 / 50000: loss 32098.477054
iteration 41200 / 50000: loss 28539.074311
iteration 41300 / 50000: loss 28936.455713
iteration 41400 / 50000: loss 33480.592834
iteration 41500 / 50000: loss 38536.593322
iteration 41600 / 50000: loss 33833.361101
iteration 41700 / 50000: loss 35093.469876
iteration 41800 / 50000: loss 28696.308953
iteration 41900 / 50000: loss 37707.328978
iteration 42000 / 50000: loss 30197.820386
iteration 42100 / 50000: loss 24131.054903
iteration 42200 / 50000: loss 35748.690775
iteration 42300 / 50000: loss 27394.670834
iteration 42400 / 50000: loss 25668.014276
iteration 42500 / 50000: loss 31372.324750
iteration 42600 / 50000: loss 28865.363822
iteration 42700 / 50000: loss 36127.749352
iteration 42800 / 50000: loss 27979.335024
iteration 42900 / 50000: loss 34707.864391
iteration 43000 / 50000: loss 28011.870060
iteration 43100 / 50000: loss 28831.608513
iteration 43200 / 50000: loss 30791.958441
iteration 43300 / 50000: loss 32388.924674
iteration 43400 / 50000: loss 27855.486080
iteration 43500 / 50000: loss 24734.889301
iteration 43600 / 50000: loss 28455.742163
iteration 43700 / 50000: loss 26747.698508
iteration 43800 / 50000: loss 27322.799613
iteration 43900 / 50000: loss 30986.460432
iteration 44000 / 50000: loss 31088.372343
iteration 44100 / 50000: loss 31976.610484
iteration 44200 / 50000: loss 34486.160954
iteration 44300 / 50000: loss 40291.159238
iteration 44400 / 50000: loss 28178.213430
iteration 44500 / 50000: loss 30508.341310
iteration 44600 / 50000: loss 28457.843735
iteration 44700 / 50000: loss 35384.151842
iteration 44800 / 50000: loss 28388.878517
iteration 44900 / 50000: loss 34777.126451
iteration 45000 / 50000: loss 37745.837221
iteration 45100 / 50000: loss 34774.229017
iteration 45200 / 50000: loss 23957.006382
iteration 45300 / 50000: loss 35962.783055
iteration 45400 / 50000: loss 24074.503077
iteration 45500 / 50000: loss 28699.783801
iteration 45600 / 50000: loss 28710.405675
iteration 45700 / 50000: loss 28715.040729
iteration 45800 / 50000: loss 27766.633569
iteration 45900 / 50000: loss 40215.294115
iteration 46000 / 50000: loss 26740.269233
iteration 46100 / 50000: loss 28719.250118
iteration 46200 / 50000: loss 31571.539945
iteration 46300 / 50000: loss 35348.572378
iteration 46400 / 50000: loss 32971.218146
iteration 46500 / 50000: loss 26005.362715
iteration 46600 / 50000: loss 34236.236918
iteration 46700 / 50000: loss 36136.404523
iteration 46800 / 50000: loss 25448.111729
iteration 46900 / 50000: loss 32009.678918
iteration 47000 / 50000: loss 27857.295040
iteration 47100 / 50000: loss 30874.011043
iteration 47200 / 50000: loss 20972.937025
iteration 47300 / 50000: loss 31780.321468
iteration 47400 / 50000: loss 24490.037699
iteration 47500 / 50000: loss 36765.924637
iteration 47600 / 50000: loss 27804.460521
iteration 47700 / 50000: loss 31706.534829
iteration 47800 / 50000: loss 27943.432259
iteration 47900 / 50000: loss 33401.555251
iteration 48000 / 50000: loss 28385.379318
iteration 48100 / 50000: loss 29125.078210
iteration 48200 / 50000: loss 26060.564229
iteration 48300 / 50000: loss 28062.698713
iteration 48400 / 50000: loss 26745.713740
iteration 48500 / 50000: loss 31003.223446
iteration 48600 / 50000: loss 29748.654683
iteration 48700 / 50000: loss 34487.246331
iteration 48800 / 50000: loss 33904.614047
iteration 48900 / 50000: loss 40127.977687
iteration 49000 / 50000: loss 32742.793010
iteration 49100 / 50000: loss 32009.341150
iteration 49200 / 50000: loss 27011.972731
iteration 49300 / 50000: loss 27974.319780
iteration 49400 / 50000: loss 29400.805694
iteration 49500 / 50000: loss 36596.889755
iteration 49600 / 50000: loss 36850.838869
iteration 49700 / 50000: loss 33171.288230
iteration 49800 / 50000: loss 23961.999980
iteration 49900 / 50000: loss 38739.192702
Validation error: 1243.04065949
###Markdown
Debug the training and improve learningYou should be able to get a validation error of 5. So far so good. But, is it really good? Let us plot the validation and training errors to see how good the network did. Did it memorize or generalize? Discuss your observations and conclusions. If its performance is not looking good, propose and test measures. This is the part that will show me how well you have digested everything covered in the lectures.
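One concrete measure to try (a sketch only; the ranges and iteration count are arbitrary assumptions) is a small grid search over the learning rate and regularization strength, keeping the model with the lowest validation error:

```python
best_net, best_err = None, float('inf')
for lr in [1e-5, 1e-4, 1e-3]:
    for reg_strength in [0.1, 0.5, 1.0]:
        candidate = TwoLayerNet(input_size, hidden_size, num_classes)
        candidate.train(X_train, y_train, X_val, y_val,
                        num_iters=5000, batch_size=64,
                        learning_rate=lr, learning_rate_decay=0.95,
                        reg=reg_strength, verbose=False)
        err = np.sum(np.square(candidate.predict(X_val) - y_val), axis=1).mean()
        if err < best_err:
            best_net, best_err = candidate, err
print('Best validation error:', best_err)
```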
###Code
# Plot the loss function and train / validation errors
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
train = plt.plot(stats['train_err_history'], label='train')
val = plt.plot(stats['val_err_history'], label='val')
plt.legend(loc='upper right', shadow=True)
plt.title('Classification error history')
plt.xlabel('Epoch')
plt.ylabel('Clasification error')
plt.show()
print(stats['train_err_history'])
iterations_per_epoch = int(max(X_train.shape[0] / 32, 1))
print(iterations_per_epoch, X_train.shape[0])
###Output
[19150.281411869848]
739 23669
|
10_Missing_Values/3_How_to_Handle_Missing_Data.ipynb | ###Markdown
https://towardsdatascience.com/handling-missing-data-for-a-beginner-6d6f5ea53436 Handling Missing Data Understanding Missing Data - Missing data can come in all shapes and sizes.- You can have data that looks like line 1 below where it’s only missing data in the Insulin column.- You can have data that’s missing across a lot of columns like in line 2.- You can have data that contains 0s across a lot of columns like in line 3. - You can visualize each column of data in **boxplots to find outliers**.- You can also use **heatmaps** to visualize your data highlighting the missing data. 
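A quick complementary check (assuming `df` is an already-loaded DataFrame, as in the heatmap cell below) is to count the missing values per column:

```python
import pandas as pd

missing = df.isnull().sum().sort_values(ascending=False)
summary = pd.concat([missing, 100 * missing / len(df)], axis=1,
                    keys=['n_missing', 'pct_missing'])
print(summary)
```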
###Code
import seaborn as sns
sns.heatmap(df.isnull(), cbar=False)
###Output
_____no_output_____ |
functions/inheritance.ipynb | ###Markdown
Python Inheritance and Super() Inheritance allows us to define a class that inherits all the methods and properties of another class. The parent class is the class being inherited from, also called the base class. The child class is the class that inherits from another class, also called the derived class. Here's a simple class definition with an initialization method:
###Code
# Simple class definition
class Person:
def __init__(self, fname, lname):
self.firstname = fname
self.lastname = lname
def printname(self):
print(self.firstname, self.lastname)
x = Person("Sejin", "Nam")
x.printname()
###Output
Sejin Nam
###Markdown
You can inherit the methods of the parent class as below. Note that a child class also inherits the parent's `__init__()` unless it defines its own, as `Student` does here; in that case the child's `__init__()` takes over, while the other methods (such as `printname`) are still inherited.
###Code
# Creating a child class
class Student(Person):
def __init__(self, fname, lname, college, major):
self.firstname = fname
self.lastname = lname
self.college = college
self.major = major
def printmajor(self):
print(self.firstname, "is studying", self.major, "at", self.college)
y = Student("Sejin", "Nam", "UH Manoa", "Physics")
y.printmajor()
y.printname()
###Output
Sejin Nam
###Markdown
Note that the child's `__init__()` function overrides the parent's `__init__()` function. Thus, to keep the parent's initialization as well, add an explicit call to the parent's `__init__()` inside the child's:
###Code
# Calling parent's __init__() inside the child's __init__()
class Gamer(Person):
def __init__(self, fname, lname, game):
Person.__init__(self, fname, lname)
self.game = game
def printgame(self):
print(self.firstname, "plays", self.game)
z = Gamer("Sejin", "Nam", "Chess")
z.printgame()
z.printname()
###Output
Sejin Nam
###Markdown
Python also has a `super()` function that makes class inheritance more manageable and extensible.
###Code
# A child's class from Gamer parent class with super()
class Asian(Gamer):
def __init__(self, fname, lname, game, nat):
super().__init__(fname, lname, game)
self.nat = nat
def printnat(self):
print(self.firstname, "is a national of", self.nat)
a = Asian("Sejin", "Nam", "Chess", "Korea")
a.printnat()
a.printname()
###Output
Sejin Nam
|
sandbox/presentation/1570643624.ipynb | ###Markdown
Modeling the Epidemic Outbreak and Dynamics of COVID-19 in Croatia Ante Lojic Kapetanovic1, Dragan Poljak2, Department of Electronics and Computer Engineering, University of SplitPaper (submission date 11/4/2020): on ArXiv.Paper code (Python package): on GitHub.--- Content* Abstract* Results * Introduction* Data* Initial outbreak modeling * Growth modeling * Exponential growth * Sigmoidal growth* Dynamics modeling * Modified SEIR model * SEIRD model * Multiwave simulation * Interesting read* Effective reproduction number* Conclusion* Supplementary material Abstract (from the paper written during the lockdown in March, 2020)"The paper deals with a modeling of the ongoing epidemic caused by Coronavirus disease 2019 (COVID-19) on the closed territory of the Republic of Croatia. Using the official public information on the number of confirmed infected, recovered and deceased individuals, the modified SEIR compartmental model is developed to describe the underlying dynamics of the epidemic. Fitted modified SEIR model provides the prediction of the disease progression in the near future, considering strict control interventions by means of social distancing and quarantine for infected and at-risk individuals introduced at the beginning of COVID-19 spread on February, 25th by Croatian Ministry of Health. Assuming the accuracy of provided data and satisfactory representativeness of the model used, the basic reproduction number is derived. Obtained results portray potential positive developments and justify the stringent precautionary measures introduced by the Ministry of Health." Results (from the paper written during the lockdown in March, 2020)"Fitting the data provides optimal values for epidemiological parameters $\alpha$, $\beta$, $\gamma$ and $\delta$. The basic reproduction number is then calculated for different phases of the epidemic and the resulting values are $1.43$, $1.33$ and $1.25$ for $80\%$ of the data, $88\%$ of the data and the complete data set, respectively.These results imply the effectiveness of the control measures implemented to combat the epidemic as $R0$ decreases with each increase of the data set. In case there is no change in control measures, one could infer that the positive downward trend will continue up until the late April when the number of confirmed active infected cases will reach its maximum. The maximum point is also the inflection point indicating the moment at which $R0<1$ and after which, with the retention of the control measures, the number of total confirmed cases stops increasing."
###Code
from covid_19.plotting import plot_data
plot_data(
epidemics_start_date, confirmed_cases, recovered_cases,
death_cases, daily_tests)
###Output
_____no_output_____
###Markdown
IntroductionThe epidemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), began in Wuhan, China, in late December 2019 [1](fn1).A lot of effort has been invested to develop the best possible models that would predict the behavior and dynamics of the epidemic from day one.In order to determine epidemic parameters, stochastic models are well adopted and preferred in the current research, but during the ongoing epidemic process, the data are sparse and the epidemic dynamics are better described using deterministic data-driven modeling [2](fn2) [3](fn3) [4](fn4) [5](fn5) [6](fn6). 1World Health Organization. (2020) Coronavirus disease (COVID-19) outbreak2Liangrong, P. et al. (2020) Epidemic analysis of covid-19 in china by dynamical modeling3Zhao, S. et al. (2020) Modeling the epidemic dynamics and control of COVID-19 outbreak in China4Lopez, L. R. et al. (2020) A modified SEIR model to predict the COVID-19 outbreak in Spain: simulating control scenarios and multi-scale epidemics5Cereda, D. et al. (2020) The early phase of the COVID-19 outbreak in Lombardy, Italy6Calafiore, G. C. et al. (2020) A Modified SIR Model for the COVID-19 Contagion in Italy In order to determine the parameters of any epidemiological model for epidemics, all clinical features of the pathogen have to be known. Even though coronavirus-based diseases are well known and documented, there are novel important features [7](fn7):* a prolonged incubation period, which causes a time delay between the real dynamics and the reported status;* asymptomatic individuals are capable of being infectious carriers of the pathogen;* the disease transmission is achieved via respiratory droplets and is extremely difficult to prevent, since the pathogen is resilient and hardly affected by external atmospheric conditions. 7Guan, W. et al. (2020) Clinical Characteristics of Coronavirus Disease 2019 in China Here, we introduce the modified version of the SEIR(D) model, based on the early work of Kermack and McKendrick [8](fn8) [9](fn9) [10](fn10), with a single additional parameter that enables asymptomatic individuals to be active infectious pathogen carriers, and to implicitly include an additional compartment for quarantined and self-isolated individuals: 8Kermack, W. and McKendrick, A. (1991) Contributions to the mathematical theory of epidemics – I9Kermack, W. and McKendrick, A. (1991) Contributions to the mathematical theory of epidemics – II. The problem of endemicity10Kermack, W. and McKendrick, A. (1991) Contributions to the mathematical theory of epidemics – III.
Further studies of the problem of endemicity \begin{align} \label{eqn.s} S' &= - \beta \cdot \frac{I}{N} \cdot S - \delta \cdot E \cdot S \\ \label{eqn.e} E' &= \beta \cdot \frac{I}{N} \cdot S - \alpha \cdot E + \delta \cdot E \cdot S \\ \label{eqn.i} I' &= \alpha \cdot E - \gamma \cdot I - \mu \cdot I \\ \label{eqn.r} R' &= \gamma \cdot I \\ \label{eqn.d} \big(D' &= \mu \cdot I\big)\end{align} where* $S$ is the susceptibles compartment;* $E$ is the exposed compartment;* $I$ is the infected compartment;* $R$ is the recovered compartment and* $D$ is the deceased compartment.and* $\beta$ - transition or infectious rate; controls the rate of spread which represents the probability of transmitting disease between a susceptible and an infected individual per contact per unit time;* $\gamma$ - recovery rate;* $\mu$ - mortality rate; * $\alpha$ - incubation rate, the reciprocal value of the incubation period;* $\delta$ - direct transition rate between susceptible and exposed individual;* $q$ - quarantine or self-isolation rate. DataDaily data on the number of confirmed infected, recovered, deceased individuals, as well as the number of daily PCR tests performed were collected from the official website [11](fn11) and stored locally for further analysis and modeling. 11Croatian institute of public health. (2020) Official government website for accurate and verified information on Coronavirus.
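For readers who want to see the system above as code, a minimal sketch of its right-hand side integrated with scipy is shown below (the parameter values and initial conditions are placeholders, not the fitted values from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def seird_rhs(t, y, beta, delta, alpha, gamma, mu, N):
    S, E, I, R, D = y
    dS = -beta * I / N * S - delta * E * S
    dE = beta * I / N * S - alpha * E + delta * E * S
    dI = alpha * E - gamma * I - mu * I
    dR = gamma * I
    dD = mu * I
    return [dS, dE, dI, dR, dD]

# illustrative run with placeholder parameters and initial conditions
sol = solve_ivp(seird_rhs, (0, 120), [2200, 9, 3, 0, 0],
                args=(0.5, 1e-5, 1 / 5, 1 / 14, 1e-3, 2200),
                t_eval=np.linspace(0, 120, 121))
```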
###Code
dataframe
###Output
_____no_output_____
###Markdown
Initial outbreak modeling Growth modeling Exponential growth
###Code
from covid_19 import simulate
eff_date = dt.datetime(2020, 8, 1)
cases = dataframe[dataframe.date > eff_date].confirmed_cases
simulate.initial_growth(
'exponential', eff_date, cases,
normalize_data=False, n_days=7)
eff_date = dt.datetime(2020, 8, 1)
cases = dataframe[dataframe.date > eff_date].confirmed_cases
simulate.initial_growth(
'exponential', eff_date, cases,
normalize_data=False, n_days=7,
plot_confidence_intervals=True)
###Output
_____no_output_____
###Markdown
Unfortunately, there is no confirmed value of sensitivity and specificity for the tests used in Croatia, since different hospitals use different tests. The ideal case is when both sensitivity and specificity are 1 (no false classifications). The realistic case is to expect a high value of specificity and a sensitivity between 72% and 98% [12](fn12). Since this simulator takes the worst-case scenario into account (95% CI lower bound for sensitivity and upper bound for specificity), the simulation performed here uses a sensitivity with an expected value of 85% (for a 95% CI range between 80.75% and 89.25%, where the lower value is taken into account) and a specificity with an expected value of 95% (for a 95% CI range between 90.25% and 99.75%, where the upper value is taken into account). 12Watson J. et al. (2020) Interpreting a COVID-19 test result
###Code
eff_date = dt.datetime(2020, 8, 1)
cases = dataframe[dataframe.date > eff_date].confirmed_cases
tests = dataframe[dataframe.date > eff_date].daily_tests.values
simulate.initial_growth(
'exponential', eff_date, cases,
normalize_data=False, n_days=7,
plot_confidence_intervals=True,
sensitivity=0.85, specificity=0.95, ci_level=95, daily_tests=tests)
from covid_19 import simulate
simulate.averaged_new_cases_v_total_cases(confirmed_cases)
###Output
_____no_output_____
###Markdown
Sigmoidal growth
###Code
from covid_19 import simulate
eff_date = dt.datetime(2020, 8, 1)
cases = dataframe[dataframe.date > eff_date].confirmed_cases
tests = dataframe[dataframe.date > eff_date].daily_tests.values
simulate.initial_growth(
'logistic', eff_date, cases,
normalize_data=True, n_days=7,
plot_confidence_intervals=True,
sensitivity=0.85, specificity=0.95, ci_level=95, daily_tests=tests)
###Output
_____no_output_____
###Markdown
Dynamics modeling Modified SEIR model
###Code
removed_cases = recovered_cases + death_cases
active_cases = confirmed_cases - removed_cases
duration = 101
S0 = 2200
E0 = 3 * active_cases[0]
I0 = active_cases[0]
R0 = removed_cases[0]
from covid_19 import simulate
(S, E, I, R), seir_model, loss = simulate.seir_dynamics(
active_cases=active_cases[:duration],
removed_cases=removed_cases[:duration],
initial_conditions=(S0, E0, I0, R0),
epidemics_start_date=epidemics_start_date,
plot_sim=True,
plot_l=False)
###Output
_____no_output_____
###Markdown
SEIRD model
###Code
duration = 45
S0 = 2200
E0 = 3 * active_cases[0]
I0 = active_cases[0]
R0 = recovered_cases[0]
D0 = death_cases[0]
from covid_19 import simulate
(S, E, I, R, D), seird_model, loss = simulate.seird_dynamics(
active_cases=active_cases[:duration],
recovered_cases=recovered_cases[:duration],
death_cases=death_cases[:duration],
initial_conditions=(S0, E0, I0, R0, D0),
epidemics_start_date=epidemics_start_date,
plot_sim=True,
plot_l=False,
sensitivity=0.90,
specificity=0.96,
new_positives=np.diff(np.concatenate((np.array([0]), confirmed_cases[:duration]))),
total_tests=daily_tests[:duration])
from covid_19.plotting import plot_compartmental_model_forecast
S_pred, E_pred, I_pred, R_pred, D_pred = seird_model.forecast(30)
plot_compartmental_model_forecast(
epidemics_start_date,
active_cases[:duration], I, I_pred,
recovered_cases[:duration], R, R_pred,
death_cases[:duration], D, D_pred)
###Output
_____no_output_____
###Markdown
Multiwave simulation
###Code
from covid_19 import simulate
(S, E, I, R, D) = simulate.seird_multiple_waves(
active_cases=active_cases,
recovered_cases=recovered_cases,
death_cases=death_cases,
first_wave_eff_population=2200,
eff_dates=[dt.datetime(2020, 2, 26),
dt.datetime(2020, 6, 9),
dt.datetime(2020, 8, 8)],
plot_sim=True)
###Output
_____no_output_____
###Markdown
Interesting read on the topic1. [Extended SEIRS model for studying population structure, social distancing, testing, tracing, and quarantining—including stochastic implementations of these models on dynamic networks](https://twitter.com/RS_McGee/status/1242949797247508480) by Ryan McGee;2. [COVID-19 Projections Using Machine Learning](https://covid19-projections.com/) by Youyang Gu;3. [Answering the Initial 20 Questions on COVID-19](https://medium.com/@irudan/answering-the-initial-20-questions-on-covid-19-83f40b0486d1) and [Answering 20 More Questions on COVID-19](https://medium.com/@irudan/answering-20-more-questions-on-covid-19-26f179e0c354) by Igor Rudan. Reproduction number**Basic reproduction number $(R_0)$**The expected number of secondary infections in a sufficiently large population without prior immunity to a disease. The non-immunity assumption is well aligned with the COVID-19 disease outbreak, since there is no maternal immunity nor is there a functional vaccine yet.\begin{align} \label{eqn.R0} R_0 &= \frac{\beta}{\gamma + \alpha}\end{align}**Effective reproduction number $(R_t)$**The expected number of secondary infections caused by a single infected individual at time $t$ in the partially susceptible population. The importance here lies in the varying proportions of the population that become immune for a variety of reasons at any time $t$.\begin{align} \label{eqn.Rt} R_t &= S(t) \cdot R_0\end{align}
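As a small numeric illustration of the two formulas (the values below are placeholders, not the fitted parameters; $S(t)$ is taken here as the susceptible fraction of the population):

```python
beta, gamma, alpha = 0.5, 1 / 14, 1 / 5
R0_basic = beta / (gamma + alpha)   # basic reproduction number
S_t = 0.8                           # susceptible fraction at time t
R_t = S_t * R0_basic                # effective reproduction number
print(round(R0_basic, 2), round(R_t, 2))
```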
###Code
from covid_19 import R0
R0.run(
epidemics_start_date,
confirmed_cases,
averaging_period=16,
symptoms_delay=3,
ci_plot=True, sensitivity=0.8, specificity=0.95, daily_tests=daily_tests)
###Output
_____no_output_____ |
tutorials/pipelines/azure.ipynb | ###Markdown
Azure Analysis Example This is a demo notebook showing how to use the **azure** pipeline on a signal using the `orion.analysis.analyze` function. For more information about the usage of Microsoft's anomaly detection API, view their documentation [here](https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/). 1. Load the dataIn the first step, we load the signal that we want to process.To do so, we need to import the `orion.data.load_signal` function and call it passing either the path to the CSV file or the name of the signal to fetch from the `s3 bucket`.In this case, we will be loading the `S-1` signal.
###Code
from orion.data import load_signal
signal_path = 'S-1'
data = load_signal(signal_path)
data.head()
###Output
_____no_output_____
###Markdown
2. Setup the pipelineTo use the `azure` pipeline, we first need two important pieces of information: `subscription_key` and `endpoint`. In order to obtain them, you must set up an Anomaly Detection resource on the Azure portal; follow the steps mentioned [here](https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/quickstarts/client-libraries?pivots=programming-language-python&tabs=linux) to set up your resource instance.Once that's accomplished, update the hyperparameter dictionary below with the values of your instance.
###Code
# your subscription key and endpoint
subscription_key = None
endpoint = None
hyperparameters = {
"mlprimitives.custom.timeseries_preprocessing.time_segments_aggregate#1": {
"interval": 21600,
},
"orion.primitives.azure_anomaly_detector.split_sequence#1": {
"sequence_size": 6000,
"overlap_size": 2640
},
"orion.primitives.azure_anomaly_detector.detect_anomalies#1": {
"subscription_key": subscription_key,
"endpoint": endpoint,
"overlap_size": 2640,
"interval": 21600,
"granularity": "hourly",
"custom_interval": 6
}
}
###Output
_____no_output_____
###Markdown
The `split_sequence` primitive takes the signal and splits it into multiple signals based on the `sequence_size` and `overlap_size`. Since the method uses a rolling window sequence approach, we use the `overlap_size` to maintain historical information when splitting the sequence.It is customary to set the `overlap_size` to the same value in both the `split_sequence` and `detect_anomalies` primitives. In addition, we need the frequency of the signal both as a timestamp interval (`interval`) and in a convention-based form, where `granularity` refers to the aggregation unit (e.g. hourly, minutely, etc.) and `custom_interval` refers to the quantity (in this case, 6 hours). 3. Detect anomalies using azure pipelineOnce we have the data and setup, we use the azure pipeline to analyze it and search for anomalies.In order to do so, we will have to import the `orion.analysis.analyze` function and pass it the loaded data and the path to the pipeline JSON that we want to use.In this case, we will be using the `azure.json` pipeline from inside the `orion` folder.The output will be a ``pandas.DataFrame`` containing a table with the detected anomalies.
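Going back to the splitting step, here is a rough illustration of that rolling-window idea (this is not the actual orion primitive, only a sketch of how `sequence_size` and `overlap_size` interact):

```python
def split_sequence(values, sequence_size, overlap_size):
    step = sequence_size - overlap_size
    return [values[start:start + sequence_size]
            for start in range(0, max(len(values) - overlap_size, 1), step)]

# consecutive chunks share `overlap_size` points of history
chunks = split_sequence(list(range(20)), sequence_size=8, overlap_size=3)
```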
###Code
from orion.analysis import analyze
pipeline_path = 'azure'
if subscription_key and endpoint:
anomalies = analyze(pipeline_path, data, hyperparams=hyperparameters)
###Output
_____no_output_____ |
docs/examples/1-basics/1b-Tutorial-Arps_Class.ipynb | ###Markdown
Dcapy - Arps ClassThis section introduces the `Arps` class, which is a *'wrapper'* for the Arps function seen in the previous section. It adds certain functionalities to the forecast estimation, like dates, plots, cumulatives and water calculation. By taking advantage of Python's object-oriented functionality, it is very convenient to define a class with the required properties to make an Arps declination analysis. The class defines methods that help to make the forecast in a very flexible way. That means you can make different kinds of forecasts from the same Arps declination parameters.
###Code
import os
from dcapy import dca
import numpy as np
import pandas as pd
from datetime import date
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
np.seterr(divide='ignore')
###Output
_____no_output_____
###Markdown
Arps ClassAs seen in the previous section, to define an Arps declination object you must have a *decline rate* `di`, *b coefficient* `b`, *initial time* `Ti` and *initial rate* `qi`. With these properties you can create a simple Arps class. The time array to make a forecast can vary depending on the time horizon, frequency or rate limits. In that way you can estimate multiple forecasts from the same class depending on the needs. Let's define a simple Arps class by providing the same properties we have already seen. We can add a property we had not seen so far, which is useful when we incorporate different time units: the units of the declination rate `di`. So far three time periods are handled: days, months and years.
###Code
# Define a Simple Arps Class
a1 = dca.Arps(
ti = 0,
di = 0.03,
qi = 1500,
b = 0,
freq_di='M'
)
print(a1)
###Output
Declination
Ti: 0
Qi: 1500.0 bbl/d
Di: 0.03 M
b: 0.0
###Markdown
We have defined an Arps class with a nominal declination rate of 0.03 monthly. This is useful if you want to make a forecast on a different time basis. You can get forecasts on a daily, monthly or annual basis from the same Arps class. Time basis When the time is defined with integers, they can represent any of the periods available (days, months or years). For example you can define a forecast on a daily basis each day or on a daily basis each month. Next are the different ways you can create a forecast. By calling the method `forecast` and providing either a time array or the start and end, and the frequency of the output, it returns a pandas DataFrame with the forecast and some useful metadata
###Code
print('Calculate Daily Basis each day')
fr = a1.forecast(start=0,end=1095,freq_input='D',freq_output='D')
print(fr)
###Output
Calculate Daily Basis each day
oil_rate oil_cum iteration oil_volume
date
0 1500.000000 0.000000 0 1499.250250
1 1498.500750 1499.250250 0 1498.501000
2 1497.002998 2997.001999 0 1497.003248
3 1495.506743 4493.256745 0 1495.506993
4 1494.011984 5988.015984 0 1494.012233
... ... ... ... ...
1090 504.324741 995675.259440 0 504.324825
1091 503.820668 996179.332102 0 503.820752
1092 503.317099 996682.900944 0 503.317183
1093 502.814034 997185.966468 0 502.814117
1094 502.311471 997688.529178 0 502.562710
[1095 rows x 4 columns]
###Markdown
Let's Plot it instead
###Code
print('Calculate Daily Basis each day - Plot')
fr = a1.plot(start=0,end=1095,freq_input='D',freq_output='D')
###Output
Calculate Daily Basis each day - Plot
###Markdown
Generate forecasts with more period alternatives
###Code
print('Calculate Daily Basis each Month')
fr = a1.forecast(start=0,end=1096,freq_input='D',freq_output='M')
print(fr)
a1.plot(start=0,end=1096,freq_input='D',freq_output='M',rate_kw=dict(palette=['darkgreen'],linestyle='-',linewidth=5))
print('Calculate Daily Basis each Year')
fr = a1.forecast(start=0,end=1096,freq_input='D',freq_output='A')
print(fr)
#Assign to a matplotlib axes
fig, ax = plt.subplots(figsize=(10,7))
a1.plot(start=0,end=1096,freq_input='D',freq_output='A',cum=True,rate_kw = {'palette':['green']}, ax=ax)
ax.set_title('Arps Forecast on Daily Basis each year', fontsize=14)
ax.set_xlabel('Time [days]', fontsize=10)
ax.set_ylabel('Oil Rate [bbl/d]', fontsize=10)
print('Calculate Monthly Basis each Month')
fr = a1.forecast(start=0,end=37,freq_input='M',freq_output='M')
print(fr)
fig, ax = plt.subplots()
a1.plot(start=0,end=37,freq_input='M',freq_output='M',rate_kw=dict(palette=['darkgreen'],linestyle='-.',linewidth=2))
ax.set_title('Arps Forecast on Month Basis each month', fontsize=14)
ax.set_xlabel('Time [months]', fontsize=10)
ax.set_ylabel('Oil Rate [bbl/d]', fontsize=10)
print('Calculate Monthly Basis each Year')
fr = a1.forecast(start=0,end=37,freq_input='M',freq_output='A')
print(fr)
fig, ax = plt.subplots()
a1.plot(start=0,end=37,freq_input='M',freq_output='A',rate_kw=dict(palette=['darkgreen'],linestyle='-.',linewidth=2))
ax.set_title('Arps Forecast on Month Basis each year', fontsize=14)
ax.set_xlabel('Time [months]', fontsize=10)
ax.set_ylabel('Oil Rate [bbl/d]', fontsize=10)
print('Calculate Annual Basis each Year')
fr = a1.forecast(start=0,end=4,freq_input='A',freq_output='A')
print(fr)
fig, ax = plt.subplots()
a1.plot(start=0,end=4,freq_input='A',freq_output='A',rate_kw=dict(palette=['darkgreen'],linestyle='-.',linewidth=2))
ax.set_title('Arps Forecast on Annual Basis each year', fontsize=14)
ax.set_xlabel('Time [Years]', fontsize=10)
ax.set_ylabel('Oil Rate [bbl/d]', fontsize=10)
###Output
Calculate Annual Basis each Year
oil_rate oil_cum iteration oil_volume
date
0 1500.000000 0.000000e+00 0 459783.920767
1 1046.514489 4.597839e+05 0 390282.138697
2 730.128384 7.805643e+05 0 272290.608657
3 509.393288 1.004365e+06 0 223800.860687
###Markdown
Multiple Values You may have noticed that the pandas DataFrame returned with the forecast has a column named *iteration*. Since we have so far defined only single values for the Arps parameters, only one iteration is created. You can declare multiple values for any of the Arps parameters, and they will result in multiple iterations in the pandas DataFrame.
###Code
# Define an Arps Class with multiple values
a2 = dca.Arps(
ti = 0,
di = 0.03,
qi = [1500,1000,500],
b = 0,
freq_di='M'
)
print(a2)
print('Calculate Monthly Basis each month - Multiple parameters')
fr = a2.forecast(start=0,end=12,freq_input='M',freq_output='M')
#print(fr)
fig, ax = plt.subplots()
a2.plot(start=0,end=12,freq_input='M',freq_output='M')
###Output
Calculate Monthly Basis each month - Multiple parameters
###Markdown
Estimate Water Rate. You can add water columns to the returned forecast by providing either a fluid rate or a water cut (`bsw`). When either of them is provided, the function assumes it is constant and the water estimation is a simple subtraction.
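Judging by the columns printed below (an inference from the outputs, not from the Dcapy documentation), with a constant water cut the added columns appear to follow

$$q_w=q_o\,\frac{bsw}{1-bsw},\qquad q_{fluid}=\frac{q_o}{1-bsw},\qquad WOR=\frac{q_w}{q_o},$$

while with a constant fluid rate the water rate is simply the difference $q_w=q_{fluid}-q_o$.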
###Code
# Define an Arps Class with multiple values - Fluid rate
a3 = dca.Arps(
ti = 0,
di = 0.03,
qi = [1500,1450],
b = [0,1],
freq_di='M',
fluid_rate = 2000
)
fr = a3.forecast(start=0,end=12,freq_input='M',freq_output='M')
print(fr)
a4 = dca.Arps(
ti = 0,
di = 0.03,
qi = [1500,1450],
b = [0,1],
freq_di='M',
bsw = 0.6
)
fr = a4.forecast(start=0,end=12,freq_input='M',freq_output='M')
print(fr)
###Output
oil_rate oil_cum iteration oil_volume bsw water_rate \
date
0 1500.000000 0.000000 0 44331.699677 0.6 2250.000000
1 1455.668300 44331.699677 0 43676.599812 0.6 2183.502450
2 1412.646800 87353.199624 0 42385.761208 0.6 2118.970201
3 1370.896778 129103.222093 0 41133.072650 0.6 2056.345167
4 1330.380655 169619.344924 0 39917.406635 0.6 1995.570983
5 1291.061965 208938.035362 0 38737.668979 0.6 1936.592947
6 1252.905317 247094.682883 0 37592.797841 0.6 1879.357976
7 1215.876369 284123.631045 0 36481.762759 0.6 1823.814553
8 1179.941792 320058.208400 0 35403.563725 0.6 1769.912687
9 1145.069242 354930.758495 0 34357.230289 0.6 1717.603862
10 1111.227331 388772.668977 0 33341.820679 0.6 1666.840997
11 1078.385600 421614.399852 0 32841.730875 0.6 1617.578400
0 1450.000000 0.000000 1 42860.263250 0.6 2175.000000
1 1407.766990 42860.263250 1 42244.958390 0.6 2111.650485
2 1367.924528 84489.916780 1 41048.698150 0.6 2051.886792
3 1330.275229 124957.659550 1 39918.338458 0.6 1995.412844
4 1294.642857 164326.593695 1 38848.578447 0.6 1941.964286
5 1260.869565 202654.816444 1 37834.671049 0.6 1891.304348
6 1228.813559 239995.935792 1 36872.352494 0.6 1843.220339
7 1198.347107 276399.521433 1 35957.782326 0.6 1797.520661
8 1169.354839 311911.500445 1 35087.492125 0.6 1754.032258
9 1141.732283 346574.505682 1 34258.341517 0.6 1712.598425
10 1115.384615 380428.183478 1 33467.480278 0.6 1673.076923
11 1090.225564 413509.466239 1 33081.282761 0.6 1635.338346
fluid_rate wor water_cum fluid_cum water_volume \
date
0 3750.000000 1.5 0.000000 0.000000e+00 65505.073515
1 3639.170751 1.5 65505.073515 1.091751e+05 64537.089766
2 3531.617001 1.5 129074.179531 2.151236e+05 62629.730511
3 3427.241945 1.5 190764.534537 3.179409e+05 60778.742242
4 3325.951638 1.5 250631.664016 4.177194e+05 58982.458944
5 3227.654912 1.5 308729.452424 5.145491e+05 57239.263839
6 3132.263293 1.5 365110.191695 6.085170e+05 55547.587937
7 3039.690922 1.5 419824.628298 6.997077e+05 53905.908612
8 2949.854479 1.5 472922.008920 7.882033e+05 52312.748245
9 2862.673104 1.5 524450.124787 8.740835e+05 50766.672882
10 2778.068328 1.5 574455.354683 9.574256e+05 49266.290951
11 2695.964000 1.5 622982.706690 1.038305e+06 48527.352007
0 3625.000000 1.5 0.000000 0.000000e+00 63349.514563
1 3519.417476 1.5 63349.514563 1.055825e+05 62453.059168
2 3419.811321 1.5 124906.118337 2.081769e+05 60709.494547
3 3325.688073 1.5 184768.503658 3.079475e+05 59060.656946
4 3236.607143 1.5 243027.432229 4.050457e+05 57499.029503
5 3152.173913 1.5 299766.562664 4.996109e+05 56017.870302
6 3072.033898 1.5 355063.172833 5.917720e+05 54611.115002
7 2995.867769 1.5 408988.792668 6.816480e+05 53273.293788
8 2923.387097 1.5 461609.760410 7.693496e+05 51999.460249
9 2854.330709 1.5 512987.713166 8.549795e+05 50785.130224
10 2788.461538 1.5 563180.020858 9.386334e+05 49626.229034
11 2725.563910 1.5 612240.171234 1.020400e+06 49060.150376
fluid_volume
date
0 109175.122524
1 107561.816276
2 104382.884186
3 101297.903737
4 98304.098239
5 95398.773066
6 92579.313228
7 89843.181021
8 87187.913741
9 84611.121470
10 82110.484919
11 80878.920011
0 105582.524272
1 104088.431947
2 101182.490912
3 98434.428244
4 95831.715839
5 93363.117170
6 91018.525004
7 88788.822981
8 86665.767082
9 84641.883707
10 82710.381724
11 81766.917293
###Markdown
Remember that you can also pass a time list with a custom time distribution
###Code
fr = a4.forecast(time_list=[0,2,3,4,6,8,12],freq_input='M',freq_output='M')
print(fr)
###Output
oil_rate oil_cum iteration oil_volume bsw water_rate \
date
0 1500.000000 0.000000 0 87353.199624 0.6 2250.000000
2 1412.646800 87353.199624 0 64551.611047 0.6 2118.970201
3 1370.896778 129103.222093 0 41133.072650 0.6 2056.345167
4 1330.380655 169619.344924 0 58995.730395 0.6 1995.570983
6 1252.905317 247094.682883 0 75219.431738 0.6 1879.357976
8 1179.941792 320058.208400 0 103195.414005 0.6 1769.912687
12 1046.514489 453485.510893 0 133427.302493 0.6 1569.771734
0 1450.000000 0.000000 1 84489.916780 0.6 2175.000000
2 1367.924528 84489.916780 1 62478.829775 0.6 2051.886792
3 1330.275229 124957.659550 1 39918.338458 0.6 1995.412844
4 1294.642857 164326.593695 1 57519.138121 0.6 1941.964286
6 1228.813559 239995.935792 1 73792.453375 0.6 1843.220339
8 1169.354839 311911.500445 1 102928.439421 0.6 1754.032258
12 1066.176471 445852.814635 1 133941.314190 0.6 1599.264706
fluid_rate wor water_cum fluid_cum water_volume \
date
0 3750.000000 1.5 0.000000 0.000000e+00 127138.212034
2 3531.617001 1.5 127138.212034 2.118970e+05 94414.283520
3 3427.241945 1.5 188828.567040 3.147143e+05 60778.742242
4 3325.951638 1.5 248695.696518 4.144928e+05 86314.304009
6 3132.263293 1.5 361457.175059 6.024286e+05 109478.119892
8 2949.854479 1.5 467651.936303 7.794199e+05 147283.684642
12 2616.286223 1.5 656024.544342 1.093374e+06 188372.608039
0 3625.000000 1.5 0.000000 0.000000e+00 123113.207547
2 3419.811321 1.5 123113.207547 2.051887e+05 91487.796434
3 3325.688073 1.5 182975.592868 3.049593e+05 59060.656946
4 3236.607143 1.5 241234.521440 4.020575e+05 84426.074455
6 3072.033898 1.5 351827.741779 5.863796e+05 107917.577911
8 2923.387097 1.5 457069.677263 7.617828e+05 148576.850095
12 2665.441176 1.5 648981.441968 1.081636e+06 191911.764706
fluid_volume
date
0 211897.020056
2 157357.139200
3 101297.903737
4 143857.173349
6 182463.533154
8 245472.807736
12 313954.346732
0 205188.679245
2 152479.660724
3 98434.428244
4 140710.124092
6 179862.629852
8 247628.083491
12 319852.941176
###Markdown
Using the Arps class with datesYou can also define the Arps class with dates. As before, the output frequency options work in the same way.
###Code
a5 = dca.Arps(
ti = date(2021,1,1),
di = [0.03,0.05],
qi = 1500,
b = 0,
freq_di='M',
fluid_rate = 2000
)
print(a5)
fr = a5.forecast(start=date(2021,1,1),end=date(2021,1,10),freq_output='D')
print(fr.head())
print(fr.tail())
a5.plot(start=date(2021,1,1),end=date(2021,1,10),freq_output='D')
fr = a5.forecast(start=date(2021,1,1),end=date(2022,1,1),freq_output='M')
print(fr)
fr = a5.forecast(start=date(2021,1,1),end=date(2026,1,1),freq_output='A')
print(fr)
###Output
oil_rate oil_cum iteration oil_volume fluid_rate \
date
2021 1500.000000 0.000000e+00 0 458705.023683 2000.0
2022 1041.294976 4.587050e+05 0 388568.257432 2000.0
2023 722.863485 7.771365e+05 0 269742.782947 2000.0
2024 501.809410 9.981906e+05 0 187428.626667 2000.0
2025 348.006232 1.151994e+06 0 130112.324911 2000.0
2026 241.584761 1.258415e+06 0 106421.471200 2000.0
2021 1500.000000 0.000000e+00 1 410168.511963 2000.0
2022 816.385813 4.101685e+05 1 316702.840738 2000.0
2023 444.323864 6.334057e+05 1 172367.804160 2000.0
2024 241.826466 7.549041e+05 1 93878.173091 2000.0
2025 131.396621 8.211620e+05 1 51093.872466 2000.0
2026 71.513558 8.570919e+05 1 35929.837558 2000.0
water_rate bsw wor water_cum fluid_cum \
date
2021 500.000000 0.250000 0.333333 0.000000e+00 0.0
2022 958.705024 0.479353 0.920685 3.499273e+05 730000.0
2023 1277.136515 0.638568 1.766774 8.160822e+05 1460000.0
2024 1498.190590 0.749095 2.985577 1.362922e+06 2190000.0
2025 1651.993768 0.825997 4.747024 1.967551e+06 2922000.0
2026 1758.415239 0.879208 7.278668 2.609373e+06 3652000.0
2021 500.000000 0.250000 0.333333 0.000000e+00 0.0
2022 1183.614187 0.591807 1.449822 4.320192e+05 730000.0
2023 1555.676136 0.777838 3.501221 9.998410e+05 1460000.0
2024 1758.173534 0.879087 7.270393 1.641574e+06 2190000.0
2025 1868.603379 0.934302 14.221092 2.325483e+06 2922000.0
2026 1928.486442 0.964243 26.966725 3.029381e+06 3652000.0
water_volume fluid_volume
date
2021 349927.333644 730000.0
2022 408041.080785 730000.0
2023 506497.196561 730000.0
2024 575734.642178 731000.0
2025 623225.640771 731000.0
2026 641821.562380 730000.0
2021 432019.178111 730000.0
2022 499920.483838 730000.0
2023 604777.564702 730000.0
2024 662821.088355 731000.0
2025 693903.194105 731000.0
2026 703897.551339 730000.0
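###Markdown
Because the forecast is returned as a plain pandas DataFrame, ordinary pandas operations apply directly. For example, a minimal sketch (using only the `iteration` and `oil_cum` columns shown above) to summarize the cumulative oil reached by each declination scenario:
###Code
# Hedged sketch: maximum cumulative oil per iteration (scenario) of the dated forecast above
print(fr.groupby('iteration')['oil_cum'].max())
###Output
_____no_output_____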
###Markdown
Plot them
###Code
a5.plot(start=date(2021,1,1),end=date(2022,1,1),freq_output='M')
###Output
_____no_output_____ |
homeworks/tarea_02/tarea_02.ipynb | ###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Gonzalo Gacitua Hernández**Rol**: 201551544-12.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe declared as `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 holding the grayscale representation of the image (0-white, 255-black) and the last one corresponding to the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos?
###Code
#verifiquemos con describe
digits.describe()
#veamos que tipo de datos tiene digits
digits.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1797 entries, 0 to 1796
Data columns (total 65 columns):
c00 1797 non-null int32
c01 1797 non-null int32
c02 1797 non-null int32
c03 1797 non-null int32
c04 1797 non-null int32
c05 1797 non-null int32
c06 1797 non-null int32
c07 1797 non-null int32
c08 1797 non-null int32
c09 1797 non-null int32
c10 1797 non-null int32
c11 1797 non-null int32
c12 1797 non-null int32
c13 1797 non-null int32
c14 1797 non-null int32
c15 1797 non-null int32
c16 1797 non-null int32
c17 1797 non-null int32
c18 1797 non-null int32
c19 1797 non-null int32
c20 1797 non-null int32
c21 1797 non-null int32
c22 1797 non-null int32
c23 1797 non-null int32
c24 1797 non-null int32
c25 1797 non-null int32
c26 1797 non-null int32
c27 1797 non-null int32
c28 1797 non-null int32
c29 1797 non-null int32
c30 1797 non-null int32
c31 1797 non-null int32
c32 1797 non-null int32
c33 1797 non-null int32
c34 1797 non-null int32
c35 1797 non-null int32
c36 1797 non-null int32
c37 1797 non-null int32
c38 1797 non-null int32
c39 1797 non-null int32
c40 1797 non-null int32
c41 1797 non-null int32
c42 1797 non-null int32
c43 1797 non-null int32
c44 1797 non-null int32
c45 1797 non-null int32
c46 1797 non-null int32
c47 1797 non-null int32
c48 1797 non-null int32
c49 1797 non-null int32
c50 1797 non-null int32
c51 1797 non-null int32
c52 1797 non-null int32
c53 1797 non-null int32
c54 1797 non-null int32
c55 1797 non-null int32
c56 1797 non-null int32
c57 1797 non-null int32
c58 1797 non-null int32
c59 1797 non-null int32
c60 1797 non-null int32
c61 1797 non-null int32
c62 1797 non-null int32
c63 1797 non-null int32
target 1797 non-null int32
dtypes: int32(65)
memory usage: 456.3 KB
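###Markdown
To answer the records-per-class question directly, a quick count over the `target` column is enough (a minimal sketch using only the `digits` dataframe already loaded above):
###Code
# Hedged sketch: number of records for each digit class (0-9)
print(digits['target'].value_counts().sort_index())
###Output
_____no_output_____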
###Markdown
1) From the above we note that the data in the `digits` dataframe are arranged in columns named $c_i$, where $i$ denotes the i-th column, with i between 0 and 63, plus the target. 2) According to the `info` command, 456.3 KB of memory are being used. 3) According to `info`, the data are of type int32. 4) There are 1797 records in each column. Ejercicio 2**Visualización:** To visualize the data we will use the `imshow` method of `matplotlib`. It is necessary to reshape the array from dimensions (1,64) to (8,8) so that the image is square and the digit can be distinguished. We will also overlay the label corresponding to the digit using the `text` method. This will let us compare the generated image with the label associated with the values. We will do the above for the first 25 records of the file.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
#haremos todos los plot de una
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for i in range(1,nx*ny+1):
image=digits_dict['images'][i]
fig.add_subplot(nx, ny, i)
plt.imshow(image)
plt.show()
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
from sklearn.model_selection import train_test_split
#se crea el split, con un test size de 33% de los datos
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
###Output
_____no_output_____
###Markdown
**Haremos el Regresor logístico:**
###Code
from sklearn.linear_model import LogisticRegression
Regresor=LogisticRegression()
Regresor.fit(X_train,y_train)
###Output
C:\Users\56982\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
C:\Users\56982\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
###Markdown
**Ahora KNN:**
###Code
#desde la fuente https://medium.com/@erikgreenj/k-neighbors-classifier-with-gridsearchcv-basics-3c445ddeb657
#se encontró una forma de usar KNN con gridsearch, pero se adaptó un poco
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
knn=KNeighborsClassifier()
grid_params={
'n_neighbors':[1,2,3,4,5,6,7,8,9,10],
'weights': ['uniform','distance'],
}
gs= GridSearchCV(
knn,
grid_params,
verbose=1,
cv=3,
n_jobs=-1
)
gs_results=gs.fit(X_train,y_train)
###Output
Fitting 3 folds for each of 20 candidates, totalling 60 fits
###Markdown
**Usaremos el Perceptrón, uno de los métodos más básicos, pero es bien flexible**
###Code
from sklearn.linear_model import Perceptron
Perceptron=Perceptron(tol=1e-3,random_state=0)
Perceptron.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
LET'S EVALUATE THE METRICS
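For reference, the per-class metrics reported by `classification_report` follow the standard definitions

$$\text{precision}=\frac{TP}{TP+FP},\qquad \text{recall}=\frac{TP}{TP+FN},\qquad F_1=2\,\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}},$$

while accuracy is simply the fraction of all predictions that are correct.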
###Code
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
We will use the confusion matrix and `classification_report`, which reports precision, recall and f-score. **Regresor Logístico**
###Code
Regresor.score(X_test,y_test)
y_hat=Regresor.predict(X_test)
confusion_matrix(y_test, y_hat)
f1_score(y_test,y_hat,average='micro')
print(classification_report(y_test,y_hat))
###Output
precision recall f1-score support
0 1.00 0.97 0.98 63
1 0.94 0.86 0.90 59
2 1.00 0.96 0.98 55
3 0.98 0.96 0.97 68
4 0.97 0.98 0.98 66
5 0.94 0.96 0.95 52
6 0.98 1.00 0.99 54
7 1.00 0.98 0.99 62
8 0.82 0.96 0.88 51
9 0.94 0.94 0.94 64
accuracy 0.96 594
macro avg 0.96 0.96 0.96 594
weighted avg 0.96 0.96 0.96 594
###Markdown
**KNN**
###Code
gs_results.score(X_test,y_test)
y_hat2=gs_results.predict(X_test)
confusion_matrix(y_test, y_hat2)
f1_score(y_test,y_hat2,average='micro')
print(classification_report(y_test,y_hat2))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 63
1 0.97 1.00 0.98 59
2 1.00 0.98 0.99 55
3 0.99 1.00 0.99 68
4 1.00 1.00 1.00 66
5 0.98 0.98 0.98 52
6 1.00 1.00 1.00 54
7 0.98 0.98 0.98 62
8 0.98 0.98 0.98 51
9 0.97 0.94 0.95 64
accuracy 0.99 594
macro avg 0.99 0.99 0.99 594
weighted avg 0.99 0.99 0.99 594
###Markdown
**Perceptron**
###Code
Perceptron.score(X_test,y_test)
y_hat3= Perceptron.predict(X_test)
confusion_matrix(y_test, y_hat3)
print(classification_report(y_test,y_hat3))
###Output
precision recall f1-score support
0 1.00 0.98 0.99 63
1 0.89 0.92 0.90 59
2 1.00 0.98 0.99 55
3 0.89 0.99 0.94 68
4 0.98 0.95 0.97 66
5 0.94 0.94 0.94 52
6 0.93 1.00 0.96 54
7 0.98 0.94 0.96 62
8 0.93 0.84 0.89 51
9 0.95 0.94 0.94 64
accuracy 0.95 594
macro avg 0.95 0.95 0.95 594
weighted avg 0.95 0.95 0.95 594
###Markdown
Ejercicio 4__Comprensión del modelo:__ Tomando en cuenta el mejor modelo entontrado en el `Ejercicio 3`, debe comprender e interpretar minuciosamente los resultados y gráficos asocados al modelo en estudio, para ello debe resolver los siguientes puntos: * **Cross validation**: usando **cv** (con n_fold = 10), sacar una especie de "intervalo de confianza" sobre alguna de las métricas estudiadas en clases: * $\mu \pm \sigma$ = promedio $\pm$ desviación estandar * **Curva de Validación**: Replica el ejemplo del siguiente [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) pero con el modelo, parámetros y métrica adecuada. Saque conclusiones del gráfico. * **Curva AUC–ROC**: Replica el ejemplo del siguiente [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) pero con el modelo, parámetros y métrica adecuada. Saque conclusiones del gráfico. **Veamos Cross Validation para KNN**
###Code
from sklearn.model_selection import cross_validate
cv_validate=cross_validate(gs_results, X, y, cv=10)
#veamos los resultados!
for i in range (0,len(cv_validate['test_score'])):
print('El score del fold '+str(i)+' es: '+str(cv_validate['test_score'][i]))
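# Hedged sketch: the exercise asks for a mean +/- standard deviation interval over the 10 folds;
# cv_validate['test_score'] already holds one score per fold, so a compact summary is:
print('Mean CV score: {:.3f} +/- {:.3f}'.format(np.mean(cv_validate['test_score']), np.std(cv_validate['test_score'])))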
from sklearn.model_selection import validation_curve
param_range = np.array([i for i in range(1, 10)])
#Validation curve usando lo obtenido con GridSearch
train_scores, test_scores = validation_curve(
KNeighborsClassifier(weights = 'distance',
metric = 'euclidean'),
X_train, y_train,
param_name="n_neighbors",
param_range=param_range,
scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
###Output
C:\Users\56982\Anaconda3\lib\site-packages\sklearn\model_selection\_split.py:1978: FutureWarning: The default value of cv will change from 3 to 5 in version 0.22. Specify it explicitly to silence this warning.
warnings.warn(CV_WARNING, FutureWarning)
###Markdown
Both the training-score curve and the cross-validation curve look very good, staying almost constant at 1. Hence the method performs very well on this dataset.
###Code
from itertools import cycle
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
index = np.argmax(test_scores_mean)
param_range[index]
y = label_binarize(y, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
n_classes = y.shape[1]
n_samples, n_features = X.shape
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.20,
train_size=0.80,
random_state=2020)
classifier = KNeighborsClassifier(weights = 'distance',metric = 'euclidean', n_neighbors = param_range[index])
y_score = classifier.fit(x_train, y_train).predict(x_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#AOC-ROC para multiples clases (código también obtenido del link)
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
#Curva promedio de las multi-clases
import sys
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle='-', linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Ejercicio 5__Reducción de la dimensión:__ Tomando en cuenta el mejor modelo encontrado en el `Ejercicio 3`, debe realizar una redcción de dimensionalidad del conjunto de datos. Para ello debe abordar el problema ocupando los dos criterios visto en clases: * **Selección de atributos*** **Extracción de atributos**__Preguntas a responder:__Una vez realizado la reducción de dimensionalidad, debe sacar algunas estadísticas y gráficas comparativas entre el conjunto de datos original y el nuevo conjunto de datos (tamaño del dataset, tiempo de ejecución del modelo, etc.)
###Code
#SELECCIÓN DE ATRIBUTOS
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
x_training = digits.drop(columns="target")
y_training = digits["target"]
x_training = x_training.drop(['c00','c32','c39'],axis=1)
k = 30 # número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
X_a=x_training[atributos]
import time
start_time = time.time()
knn_grid_result = gs_results.fit(x_training, y_training)
print("%s segundos, que demora sin selección de atributos" % (time.time() - start_time))
start_time = time.time()
knn_grid_result = gs_results.fit(X_a, y_training)
print('%s segundos, que demora tras hacer la seleccionar atributos' % (time.time() - start_time))
###Output
Fitting 3 folds for each of 20 candidates, totalling 60 fits
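###Markdown
For the feature-extraction criterion of this exercise, a minimal PCA sketch could look like the following. It reuses `x_training`, `y_training` and the `gs_results` grid search defined above; the 30 components are only an illustrative choice, and scaling the features beforehand is usually advisable.
###Code
# Hedged sketch: dimensionality reduction by feature extraction with PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=30)
X_pca = pca.fit_transform(x_training)
print('Explained variance kept:', pca.explained_variance_ratio_.sum())
start_time = time.time()
knn_grid_result = gs_results.fit(X_pca, y_training)
print('%s seconds fitting on the PCA-reduced data' % (time.time() - start_time))
###Output
_____no_output_____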
###Markdown
After the feature selection the fitting time changes substantially: it drops from a little over 2 minutes to less than half a second, more than 280 times faster. A tremendously significant change. Ejercicio 6__Visualizando Resultados:__ Code is provided below to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, y_train) # ajustando el modelo
y_pred = list(model.predict(X_test))
# Mostrar los datos correctos
if label=="correctos":
mask = (y_pred == y_test)
color = "green"
# Mostrar los datos incorrectos
elif label=="incorrectos":
mask = (y_pred != y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test
y_aux_true = y_test
y_aux_pred = y_pred
# We'll plot the first 100 examples, randomly choosen
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question*** Taking into account the best model found in `Ejercicio 3`, plot the results when: * the predicted and the original value are equal * the predicted and the original value are different * When the predicted and the original value are different, why do these failures occur?
###Code
#valor predicho e iguales
mostar_resultados(digits,KNeighborsClassifier(),nx=5, ny=5,label = "correctos")
#valor predicho y distintos
mostar_resultados(digits,KNeighborsClassifier(),nx=5, ny=5,label = "incorrectos")
###Output
_____no_output_____
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Gonzalo Gallardo Urrutia**Rol**: 201741523-12.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe declared as `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 holding the grayscale representation of the image (0-white, 255-black) and the last one corresponding to the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos?
###Code
digits.describe()
###Output
_____no_output_____
###Markdown
**How are the data distributed?**
###Code
cols = digits.columns
fig = plt.figure(figsize = (30,30))
for i in range(len(cols)-1):
plt.subplot(8,8,i+1)
plt.hist(digits[cols[i]], bins=60)
plt.title("Histograma de "+cols[i])
###Output
_____no_output_____
###Markdown
**How much memory am I using?**
###Code
digits.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1797 entries, 0 to 1796
Data columns (total 65 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 c00 1797 non-null int32
1 c01 1797 non-null int32
2 c02 1797 non-null int32
3 c03 1797 non-null int32
4 c04 1797 non-null int32
5 c05 1797 non-null int32
6 c06 1797 non-null int32
7 c07 1797 non-null int32
8 c08 1797 non-null int32
9 c09 1797 non-null int32
10 c10 1797 non-null int32
11 c11 1797 non-null int32
12 c12 1797 non-null int32
13 c13 1797 non-null int32
14 c14 1797 non-null int32
15 c15 1797 non-null int32
16 c16 1797 non-null int32
17 c17 1797 non-null int32
18 c18 1797 non-null int32
19 c19 1797 non-null int32
20 c20 1797 non-null int32
21 c21 1797 non-null int32
22 c22 1797 non-null int32
23 c23 1797 non-null int32
24 c24 1797 non-null int32
25 c25 1797 non-null int32
26 c26 1797 non-null int32
27 c27 1797 non-null int32
28 c28 1797 non-null int32
29 c29 1797 non-null int32
30 c30 1797 non-null int32
31 c31 1797 non-null int32
32 c32 1797 non-null int32
33 c33 1797 non-null int32
34 c34 1797 non-null int32
35 c35 1797 non-null int32
36 c36 1797 non-null int32
37 c37 1797 non-null int32
38 c38 1797 non-null int32
39 c39 1797 non-null int32
40 c40 1797 non-null int32
41 c41 1797 non-null int32
42 c42 1797 non-null int32
43 c43 1797 non-null int32
44 c44 1797 non-null int32
45 c45 1797 non-null int32
46 c46 1797 non-null int32
47 c47 1797 non-null int32
48 c48 1797 non-null int32
49 c49 1797 non-null int32
50 c50 1797 non-null int32
51 c51 1797 non-null int32
52 c52 1797 non-null int32
53 c53 1797 non-null int32
54 c54 1797 non-null int32
55 c55 1797 non-null int32
56 c56 1797 non-null int32
57 c57 1797 non-null int32
58 c58 1797 non-null int32
59 c59 1797 non-null int32
60 c60 1797 non-null int32
61 c61 1797 non-null int32
62 c62 1797 non-null int32
63 c63 1797 non-null int32
64 target 1797 non-null int32
dtypes: int32(65)
memory usage: 456.4 KB
###Markdown
The memory used is 456.4 KB **What type of data are they?**
###Code
digits.dtypes.unique()
###Output
_____no_output_____
###Markdown
The data type of the columns is integer, i.e. "int" **How many records are there per class?**
###Code
reg = pd.value_counts(digits.target).to_frame().reset_index().sort_values(by = 'index')
reg.rename(columns = {"index": "Clase", "target": "Registros"}).reset_index(drop = True )
###Output
_____no_output_____
###Markdown
Ejercicio 2**Visualización:** Para visualizar los datos utilizaremos el método `imshow` de `matplotlib`. Resulta necesario convertir el arreglo desde las dimensiones (1,64) a (8,8) para que la imagen sea cuadrada y pueda distinguirse el dígito. Superpondremos además el label correspondiente al dígito, mediante el método `text`. Esto nos permitirá comparar la imagen generada con la etiqueta asociada a los valores. Realizaremos lo anterior para los primeros 25 datos del archivo.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
k=1
for i in range(0,25):
plt.subplot(5,5,k)
plt.imshow(digits_dict["images"][i])
k+=1
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
import metrics_classification as metrics
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
import time
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
print('El train set tiene un total de', len(X_train), 'datos')
print('El test set tiene un total de', len(X_test), 'datos')
###Output
El train set tiene un total de 1437 datos
El test set tiene un total de 360 datos
###Markdown
**Logistic Regression**
###Code
p_log_reg = {
'penalty' : ['l1', 'l2'],
'C' : [0.1, 1, 10],
'solver' : ['liblinear'],
}
log_reg = LogisticRegression()
log_reg_grid = GridSearchCV(estimator = log_reg, param_grid = p_log_reg, cv = 10)
start = time.time()
log_reg_grid_result = log_reg_grid.fit(X_train, y_train)
time_log_reg = time.time() - start
print("El mejor score tuvo un valor de: %f \n Usando los parámetros: %s"
% (log_reg_grid_result.best_score_, log_reg_grid_result.best_params_))
y_pred = log_reg_grid.predict(X_test)
df_log_reg = pd.DataFrame({'y': y_test, 'yhat': y_pred})
print("Matriz de confusión:\n",confusion_matrix(y_test,y_pred))
###Output
Matriz de confusión:
[[32 0 0 0 1 0 0 0 0 0]
[ 0 28 0 0 0 0 0 0 0 0]
[ 0 0 33 0 0 0 0 0 0 0]
[ 0 0 0 33 0 1 0 0 0 0]
[ 0 1 0 0 44 0 1 0 0 0]
[ 0 0 1 0 0 45 1 0 0 0]
[ 0 0 0 0 0 1 34 0 0 0]
[ 0 0 0 0 0 0 0 33 0 1]
[ 0 1 0 0 0 1 0 0 28 0]
[ 0 1 0 0 0 0 0 0 3 36]]
###Markdown
**K-Nearest Neighbours**
###Code
p_knn = {
'n_neighbors' : [1, 5, 25],
'weights' : ['uniform', 'distance'],
'algorithm' : ['auto','brute', 'kd_tree','ball_tree']
}
knn = KNeighborsClassifier()
knn_grid = GridSearchCV(estimator = knn, param_grid = p_knn, cv = 10)
startt = time.time()
knn_grid_result = knn_grid.fit(X_train, y_train)
time_knn = time.time() - startt
print("El mejor score tuvo un valor de: %f \n Usando los parámetros: %s"
% (knn_grid_result.best_score_, knn_grid_result.best_params_))
y_pred = knn_grid.predict(X_test)
df_knn = pd.DataFrame({'y': y_test, 'yhat': y_pred})
print("Matriz de confusión:\n",confusion_matrix(y_test,y_pred))
###Output
Matriz de confusión:
[[33 0 0 0 0 0 0 0 0 0]
[ 0 28 0 0 0 0 0 0 0 0]
[ 0 0 33 0 0 0 0 0 0 0]
[ 0 0 0 34 0 0 0 0 0 0]
[ 0 1 0 0 45 0 0 0 0 0]
[ 0 0 0 0 0 46 1 0 0 0]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 33 0 1]
[ 0 1 0 0 0 0 0 0 28 1]
[ 0 0 0 1 1 1 0 0 0 37]]
###Markdown
**Decision Tree Classifier**
###Code
p_dtreec = {
'criterion' : ['gini', 'entropy'],
'splitter' : ['best', 'random'],
'max_features' : ['auto', 'sqrt', 'log2']
}
dtreec = DecisionTreeClassifier()
dtreec_grid = GridSearchCV(estimator = dtreec, param_grid = p_dtreec, cv = 10)
starttt = time.time()
dtreec_grid_result = dtreec_grid.fit(X_train, y_train)
time_dtreec = time.time() - starttt
print("El mejor score tuvo un valor de: %f \n Usando los parámetros: %s"
% (dtreec_grid_result.best_score_, dtreec_grid_result.best_params_))
y_pred = dtreec_grid.predict(X_test)
df_dtreec = pd.DataFrame({'y': y_test, 'yhat': y_pred})
print("Matriz de confusión:\n",confusion_matrix(y_test,y_pred))
###Output
Matriz de confusión:
[[26 0 0 0 2 1 0 0 1 3]
[ 0 24 0 0 0 0 1 0 3 0]
[ 0 0 29 3 0 0 0 0 0 1]
[ 2 0 3 23 0 2 1 0 2 1]
[ 2 0 0 0 41 0 0 3 0 0]
[ 0 0 0 2 0 39 1 2 1 2]
[ 2 0 0 0 0 1 31 0 0 1]
[ 0 0 0 0 1 0 0 31 0 2]
[ 0 1 2 0 2 0 0 0 24 1]
[ 0 1 0 4 2 1 0 0 2 30]]
###Markdown
**Which model is best based on its metrics?**
###Code
print("Métricas del modelo Logistic Regression: \n")
metrics.summary_metrics(df_log_reg)
print("Métricas del modelo K-Nearest Neighbors: \n")
metrics.summary_metrics(df_knn)
print("Métricas del modelo Decision Classifier Tree: \n")
metrics.summary_metrics(df_dtreec)
###Output
Métricas del modelo Decision Classifier Tree:
###Markdown
We can see that the metrics of the three models take similar values, but the metrics of the Decision Tree Classifier are clearly lower than those of the other two, with the K-Nearest Neighbors metrics slightly closer to 1 than those of Logistic Regression. **Which model takes the least time to fit?**
###Code
print(" El modelo Logistic Regression se ajustó en %s segundos" % time_log_reg)
print(" El modelo K-Nearest Neighbors se ajustó en %s segundos" % time_knn)
print(" El modelo Decision Tree Classifier se ajustó en %s segundos" % time_dtreec)
###Output
El modelo Decision Tree Classifier se ajustó en 0.7640001773834229 segundos
###Markdown
Clearly the Decision Tree Classifier is the model that took the least time to fit, followed by the K-Nearest Neighbors model and, behind it, the Logistic Regression model. **Which model do you choose?** A priori I would be tempted to choose the Decision Tree Classifier because it is by far the fastest to fit, but its metrics are not good enough to justify it; in contrast, the K-Nearest Neighbors model has the best metrics of the three and its fitting time is decent: not as fast as the Decision Tree Classifier, but faster than Logistic Regression. Ejercicio 4__Comprensión del modelo:__ Taking into account the best model found in `Ejercicio 3`, you must thoroughly understand and interpret the results and plots associated with the model under study; to do so, solve the following points: * **Cross validation**: using **cv** (with n_fold = 10), obtain a kind of "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. * **AUC–ROC curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot.
###Code
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
from itertools import cycle
import sys
cvs = cross_val_score(estimator = knn_grid, X = X_train, y = y_train, cv = 10)
cvs = [round(x,2) for x in cvs]
print('Precisión promedio: {0: .2f} +/- {1: .2f}'.format(np.mean(cvs),np.std(cvs)*2))
param_range = np.array([i for i in range(1,10)])
train_scores, test_scores = validation_curve(
KNeighborsClassifier(algorithm = 'auto', weights = 'uniform'),
X_train,
y_train,
param_name = "n_neighbors",
param_range = param_range,
scoring = "accuracy",
n_jobs = 1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve con K-Nearest Neighbors")
plt.xlabel("n_neighbors")
plt.ylabel("Score")
plt.ylim(0.9, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
# Binarize the output
y = label_binarize(y, classes = [i for i in range(10)])
n_classes = y.shape[1]
# Add noisy features to make the problem harder
n_samples, n_features = X.shape
# Shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(KNeighborsClassifier(algorithm = 'auto', weights = 'uniform'))
y_score = classifier.fit(X_train, y_train).predict(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize = (8,8))
plt.plot(fpr["micro"], tpr["micro"],
label = 'micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color = 'deeppink', linestyle = ':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label = 'macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color = 'navy', linestyle = ':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color = color, lw = lw,
label = 'ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw = lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Ejercicio 5__Reducción de la dimensión:__ Tomando en cuenta el mejor modelo encontrado en el `Ejercicio 3`, debe realizar una redcción de dimensionalidad del conjunto de datos. Para ello debe abordar el problema ocupando los dos criterios visto en clases: * **Selección de atributos*** **Extracción de atributos**__Preguntas a responder:__Una vez realizado la reducción de dimensionalidad, debe sacar algunas estadísticas y gráficas comparativas entre el conjunto de datos original y el nuevo conjunto de datos (tamaño del dataset, tiempo de ejecución del modelo, etc.)
###Code
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
###Output
_____no_output_____
###Markdown
**Selección de atributos**
###Code
# Separamos las columnas objetivo
x_training = digits.drop(['target','c00','c32','c39'], axis = 1) # Las clases incluidas tienen un valor constante #
y_training = digits['target']
# Aplicando el algoritmo univariante de prueba F.
k = 23 # Número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
digits[atributos]
###Output
_____no_output_____
###Markdown
**Extracción de atributos**
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
x = digits.drop("target", axis = 1).values
y = digits["target"].values
x = StandardScaler().fit_transform(x)
pca = PCA(n_components = 23)
principalComponents = pca.fit_transform(x)
# Graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
percent_variance_cum = np.cumsum(percent_variance)
columns=[f"PC{i}" for i in range(1,24)]
plt.figure(figsize = (16,9))
plt.bar(x = range(1,24), height = percent_variance_cum, tick_label = columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
# Graficar varianza por la suma acumulada de los componente
percent_variance_cum = np.cumsum(percent_variance)
columns_sum = ["PC1", "PC1+PC2", "PC1+PC2+PC3"] + [f"PC1+...+PC{i+1}" for i in range(3,23)]
plt.figure(figsize = (16,9))
plt.bar(x = range(1,24), height = percent_variance_cum, tick_label = columns_sum )
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.xticks(rotation = 45)
plt.show()
pca = PCA(n_components = 23)
principalComponents = pca.fit_transform(x)
principalDataframe = pd.DataFrame(data = principalComponents, columns = columns)
targetDataframe = digits[['target']]
newDataframe = pd.concat([principalDataframe, targetDataframe], axis = 1)
newDataframe.head()
print('Dimensión del data set original:',digits.shape)
print('Dimensión del data set reducido:',newDataframe.shape)
X = newDataframe.drop(columns="target").values
y = newDataframe["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
start_new = time.time()
knn_grid.fit(X_train, y_train)
time_knn_new = time.time() - start_new
y_pred = knn_grid.predict(X_test)
df_knn_new = pd.DataFrame({'y': y_test, 'yhat': y_pred})
print('Matriz de confusión: \n', confusion_matrix(y_test,y_pred))
print("El modelo K-Nearest Neighbors con el nuevo dataset se ajustó en %s segundos" % time_knn_new)
dif_time = time_knn - time_knn_new
print("El modelo se ejecuta", dif_time, "más rapido con el nuevo dataset")
print("Métricas del modelo K-Nearest Neighbors con el nuevo dataset: \n")
metrics.summary_metrics(df_knn_new)
###Output
Métricas del modelo K-Nearest Neighbors con el nuevo dataset:
###Markdown
Ejercicio 6__Visualizando Resultados:__ A continuación se provee código para comparar las etiquetas predichas vs las etiquetas reales del conjunto de _test_.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # ajustando el modelo
y_pred = model.predict(X_test)
# Mostrar los datos correctos
if label=="correctos":
mask = (y_pred == Y_test)
color = "green"
# Mostrar los datos incorrectos
elif label=="incorrectos":
mask = (y_pred != Y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = Y_test[mask]
y_aux_pred = y_pred[mask]
# We'll plot the first 100 examples, randomly choosen
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
if index < X_aux.shape[0]:
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question*** Taking into account the best model found in `Ejercicio 3`, plot the results when: * the predicted and the original value are equal * the predicted and the original value are different * When the predicted and the original value are different, why do these failures occur?
###Code
mostar_resultados(digits, model = KNeighborsClassifier(), nx = 3, ny = 3, label = "correctos")
mostar_resultados(digits, model = KNeighborsClassifier(), nx = 3, ny = 3, label = "incorrectos")
###Output
_____no_output_____
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Javier Pizarro Wittke**Rol**: 201510520-02.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a DataFrame named `digits` is built from the data in `digits_dict`, with 65 columns: the first 64 hold the grayscale representation of the image (0 = white, 16 = black) and the last one holds the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos?
###Code
#Tipo de datos del dataframe
digits.info()
#Datos distintos en la columna target
digits['target'].unique()
#Registros por clase
(a,b)=np.unique(digits['target'],return_counts=True)
for i in range(10):
print('hay', b[i] ,'registros de la clase' ,a[i])
#Información sobre la cantidad de elementos
promedio=b.mean()
maximo=b.max()
minimo=b.min()
print((promedio,maximo,minimo))
###Output
(179.7, 183, 174)
###Markdown
The DataFrame has 1797 rows and 65 columns (64 pixel columns plus `target`), 116805 values in total, using 456.4 KB of memory. All values are of integer type (int32, as reported by `info()`) and there are no null values. The classes are given by the `target` column: there are 10 classes labelled 0 through 9, with the counts listed above. On average there are 179.7 records per class, with a maximum of 183 (class 3) and a minimum of 174 (class 8). The histogram below shows the number of records per class.
###Code
import seaborn as sns
sns.set(rc={'figure.figsize':(15,8)})
sns.countplot(y='target',
data=digits,)
###Output
_____no_output_____
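###Markdown
One exploratory question from the list above is left unanswered: whether any records contradict prior knowledge of the data. A minimal sketch of that check, assuming the `digits` DataFrame built earlier (for `load_digits` the pixel values are expected to lie in the 0-16 range):
###Code
pixels = digits.drop(columns="target")
# Pixel intensities should stay within the 0-16 range used by load_digits
print("pixel value range:", pixels.values.min(), "-", pixels.values.max())
# No missing values are expected
print("any nulls:", digits.isnull().values.any())
###Output
_____no_output_____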
###Markdown
Ejercicio 2**Visualización:** Para visualizar los datos utilizaremos el método `imshow` de `matplotlib`. Resulta necesario convertir el arreglo desde las dimensiones (1,64) a (8,8) para que la imagen sea cuadrada y pueda distinguirse el dígito. Superpondremos además el label correspondiente al dígito, mediante el método `text`. Esto nos permitirá comparar la imagen generada con la etiqueta asociada a los valores. Realizaremos lo anterior para los primeros 25 datos del archivo.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for x in range(nx):
for y in range(ny):
axs[x,y].imshow(digits_dict["images"][5*x +y], cmap='cividis')
axs[x,y].text(3,5,s=digits['target'][5*x +y],fontsize=60,color='w')
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
#Se definen conjuntos de entrenamiento y testeo
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print('El largo del conjunto entrenamiento es', len(X_train))
print('El largo del conjunto testeo es', len(X_test))
###Output
El largo del conjunto entrenamiento es 1437
El largo del conjunto testeo es 360
###Markdown
Regresión logística
###Code
#Se instancia el modelo Regresión Logistica
from sklearn.linear_model import LogisticRegression
from metrics_classification import *
from sklearn.metrics import confusion_matrix
rlog=LogisticRegression(max_iter=5000)
rlog.fit(X_train, y_train)
#Matriz de confusión
y_true = list(y_test)
y_pred = list(rlog.predict(X_test))
print(confusion_matrix(y_true,y_pred))
#Datos acertados
acert1=sum(y_test == rlog.predict(X_test))
print(" Se acertó en", acert1, "datos")
#Metricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
df_metrics
###Output
_____no_output_____
###Markdown
K-Nearest Neighbours
###Code
#Se instancia el modelo
from sklearn.neighbors import KNeighborsClassifier
knb=KNeighborsClassifier()
knb.fit(X_train, y_train)
#Matriz de confusión
y_true = list(y_test)
y_pred = list(knb.predict(X_test))
print(confusion_matrix(y_true,y_pred))
#Datos acertados
acert2=sum(y_test == knb.predict(X_test))
print("Se acertó en", acert2, "datos")
#Métricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
df_metrics
###Output
_____no_output_____
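###Markdown
The exercise asks for a hyperparameter search on every model that has hyperparameters, but here KNN is fitted with its defaults only. A hedged sketch of such a search (the grid values are illustrative, not part of the original notebook), assuming `X_train`/`y_train` from above:
###Code
from sklearn.model_selection import GridSearchCV

grid_knn = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [3, 5, 7, 9, 11], "weights": ["uniform", "distance"]},
    cv=5,
    n_jobs=-1,
)
grid_knn.fit(X_train, y_train)
print(grid_knn.best_score_, grid_knn.best_params_)
###Output
_____no_output_____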
###Markdown
Random forest
###Code
#Se instancia modelo
from sklearn.ensemble import RandomForestClassifier
rfc=RandomForestClassifier(max_depth=50)
rfc.fit(X_train, y_train)
#Matriz de confusión
y_true = list(y_test)
y_pred = list(rfc.predict(X_test))
print(confusion_matrix(y_true,y_pred))
#Datos acertados
acert3=sum(y_test == rfc.predict(X_test))
print("Se acertó en", acert3, "datos")
#Metricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
df_metrics
#Grid search
from sklearn.model_selection import GridSearchCV
# creación del modelo
model = RandomForestClassifier()
# rango de parametros
rango_criterion = ['gini','entropy']
rango_max_depth =np.array( [4,5,6,7,8,9,10,11,12,15,20,30,40,50,70,90,120,150])
param_grid = dict(criterion=rango_criterion, max_depth=rango_max_depth)
param_grid
gs = GridSearchCV(estimator=model,
param_grid=param_grid,
scoring='accuracy',
cv=5,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
# imprimir resultados
print(gs.best_score_)
print(gs.best_params_)
###Output
0.974946767324816
{'criterion': 'gini', 'max_depth': 90}
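###Markdown
Before the answers below, a hedged sketch that puts the three models on an equal footing — 10-fold cross-validation accuracy and wall-clock fit time — assuming `X_train`/`y_train` and the estimator classes already imported above (the RandomForest depth follows the grid-search result):
###Code
import time
from sklearn.model_selection import cross_val_score

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "k-nearest neighbours": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(max_depth=90),
}
for name, model in candidates.items():
    start = time.time()
    scores = cross_val_score(model, X_train, y_train, cv=10)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}, "
          f"{time.time() - start:.1f} s")
###Output
_____no_output_____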
###Markdown
Answers Based on the metrics, the best model is K-Nearest Neighbours, whose scores are higher than those of the other two models. For that reason, and given the number of correct predictions shown above, K-Nearest Neighbours is the chosen model. Ejercicio 4__Understanding the model:__ Taking the best model found in `Ejercicio 3`, you must thoroughly understand and interpret the results and plots associated with the model under study; to do so, address the following points: * **Cross validation**: using **cv** (with n_fold = 10), derive a rough "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) with the appropriate model, parameters and metric. Draw conclusions from the plot. * **AUC–ROC curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) with the appropriate model, parameters and metric. Draw conclusions from the plot.
###Code
#cross validation
from sklearn.model_selection import cross_val_score
model = KNeighborsClassifier()
precision = cross_val_score(estimator=model,
X=X_train,
y=y_train,
cv=10)
prom=(precision.mean()).round(3)
desv_std=precision.std().round(3)
ic=[prom-desv_std,prom+desv_std]
print('El intervalo de confianza es', ic)
knb.get_params().keys()
#Cross validation
from sklearn.model_selection import validation_curve
parameters = np.arange(1,10)
train_scores, test_scores = validation_curve(model,
X_train,
y_train,
param_name = 'n_neighbors',
param_range = parameters,
scoring = 'accuracy',
n_jobs = -1)
train_scores_mean = np.mean(train_scores, axis = 1)
train_scores_std = np.std(train_scores, axis = 1)
test_scores_mean = np.mean(test_scores, axis = 1)
test_scores_std = np.std(test_scores, axis = 1)
plt.figure(figsize=(20,8))
plt.title('Validation Curve (KNeighbors)')
plt.xlabel('n_neighbors')
plt.ylabel('scores')
plt.semilogx(parameters,train_scores_mean,label = 'Training Score',color = 'red',lw =2)
plt.fill_between(parameters, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std,alpha = 0.2,
color = 'red', lw = 2)
plt.semilogx(parameters, test_scores_mean, label = 'Cross Validation Score', color = 'green',lw =2)
plt.fill_between(parameters, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha = 0.2,
color = 'green', lw = 2)
plt.legend(loc = 'best')
plt.show()
###Output
_____no_output_____
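###Markdown
The exercise also asks for an AUC–ROC curve, which is not included above. Since `KNeighborsClassifier` exposes `predict_proba`, a minimal one-vs-rest sketch (micro-averaged) could look like the following, assuming the train/test split from `Ejercicio 3`:
###Code
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

clf = KNeighborsClassifier().fit(X_train, y_train)
y_score = clf.predict_proba(X_test)                       # shape (n_samples, 10)
y_test_bin = label_binarize(y_test, classes=list(range(10)))
fpr, tpr, _ = roc_curve(y_test_bin.ravel(), y_score.ravel())
plt.plot(fpr, tpr, label=f"micro-average ROC (AUC = {auc(fpr, tpr):.3f})")
plt.plot([0, 1], [0, 1], "k--")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
###Output
_____no_output_____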
###Markdown
Ejercicio 5__Reducción de la dimensión:__ Tomando en cuenta el mejor modelo encontrado en el `Ejercicio 3`, debe realizar una redcción de dimensionalidad del conjunto de datos. Para ello debe abordar el problema ocupando los dos criterios visto en clases: * **Selección de atributos*** **Extracción de atributos**__Preguntas a responder:__Una vez realizado la reducción de dimensionalidad, debe sacar algunas estadísticas y gráficas comparativas entre el conjunto de datos original y el nuevo conjunto de datos (tamaño del dataset, tiempo de ejecución del modelo, etc.)
###Code
#Selección de atributos
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
df = pd.DataFrame(X)
df.columns = [f'c{k}' for k in range(0,X.shape[1])]
df['target']=y
# Separamos las columnas objetivo
x_training = df.drop(['target',], axis=1)
y_training = df['target']
# Aplicando el algoritmo univariante de prueba F.
k = 40 # número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
df= df[[columnas[i] for i in list(catrib.nonzero()[0])]]
print("Las columnas seleccionadas son:\n",df.columns.tolist())
#Extraccion de atributos
#Se escalan los datos
from sklearn.preprocessing import StandardScaler
X1=StandardScaler().fit_transform(df)
#Dataframe Normalizado
df_norm=pd.DataFrame(X1,columns=df.columns)
#gráfica de correlación
corr=df_norm.corr()
f,ax=plt.subplots(figsize=(15,15))
sns.set_style(style='white')
sns.heatmap(corr.round(1),
mask=np.triu(np.ones_like(corr, dtype = bool)),
annot=True)
#Se aplica PCA
from sklearn.decomposition import PCA
pca = PCA(df.shape[1])
principalComponents = pca.fit_transform(X1)
# graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columnas = df_norm.columns
plt.figure(figsize=(30,10))
sns.barplot(
x=columnas,
y=percent_variance,
)
plt.ylabel('Percentate of Variance Explained',**{'size':'22'})
plt.xlabel('Principal Component',**{'size':'22'})
plt.title('PCA Scree Plot',**{'size':'30'})
plt.show()
#gráfica varianza acumulada"
percent_variance_cum = np.cumsum(percent_variance)
columnas_1=[columnas[0]]
for i in range(1,len(columnas)):
columnas_1.append(columnas[0] + 'to' + columnas[i])
plt.figure(figsize=(25,25))
sns.barplot(
x=columnas_1,
y=percent_variance_cum,
)
plt.ylabel('Percentate of Variance Explained',**{'size':'22'})
plt.xlabel('Principal Component Cumsum',**{'size':'22'})
plt.title('PCA Scree Plot',**{'size':'30'})
plt.show()
###Output
_____no_output_____
###Markdown
Ahora, analicemos el comportamiento de nuestro modelo con al reducción de datos.
###Code
X2=df
X_train, X_test, y_train, y_test = train_test_split(X2, y, test_size=0.2)
print('El largo del conjunto entrenamiento es', len(X_train))
print('El largo del conjunto testeo es', len(X_test))
knn=KNeighborsClassifier()
knn.fit(X_train,y_train)
#matriz de confusión
y_true = list(y_test)
y_pred = list(knn.predict(X_test))
print(confusion_matrix(y_true,y_pred))
#Datos acertados
acert4=sum(y_test == knn.predict(X_test))
print("Se acertó en", acert4, "datos")
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
df_metrics
###Output
_____no_output_____
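###Markdown
The exercise also asks for comparative statistics between the original and reduced data (dataset size, execution time). A short sketch timing a plain KNN fit on both feature sets, assuming `X`, `y` and the 40-column `df` defined above:
###Code
import time

for name, data in [("64 original features", X), ("40 selected features", df.values)]:
    start = time.time()
    KNeighborsClassifier().fit(data, y)
    print(f"{name}: shape {data.shape}, fit in {time.time() - start:.3f} s")
###Output
_____no_output_____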
###Markdown
Clearly the model trained on all the features is better than the reduced one, which shows both in the metrics and in the number of correct predictions. Ejercicio 6__Visualising results:__ The following code is provided to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns = "target").values
y = digits["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
model.fit(X_train, y_train) # ajustando el modelo
y_pred = model.predict(X_test)
# Mostrar los datos correctos
if label == "correctos":
mask = (y_pred == y_test)
color = "green"
# Show the misclassified examples
elif label == "incorrectos":
mask = (y_pred != y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = y_test[mask]
y_aux_pred = y_pred[mask]
# Plot the first nx*ny examples of the selected subset
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation = 'nearest', cmap = 'gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment = 'center', verticalalignment = 'center', fontsize = 10, color = color)
ax[i][j].text(7, 0, label_true, horizontalalignment = 'center', verticalalignment = 'center', fontsize = 10, color = 'blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question*** Taking the best model found in `Ejercicio 3`, plot the results when: * the predicted and true values are equal * the predicted and true values differ * When the predicted and true values differ, why do these failures occur?
###Code
mostar_resultados(digits, KNeighborsClassifier(), nx=5, ny=5,label = "correctos")
mostar_resultados(digits, KNeighborsClassifier(), nx=2, ny=3,label = "incorrectos")
###Output
_____no_output_____
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**:Marcelino Zúñiga**Rol**:201610504-22.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a DataFrame named `digits` is built from the data in `digits_dict`, with 65 columns: the first 64 hold the grayscale representation of the image (0 = white, 16 = black) and the last one holds the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos?
###Code
#Memoria que ocupa el dataset
import sys
Memoria = digits.memory_usage() #Determina la memoria ocupada por cada columna en bytes
total = (Memoria[1]*65)/1000 #Como todas las columnas ocupan la misma cantidad de memoria multiplicamos por la cantidad
#de columnas que son 65 y dividimos por 1000 para dejar el dato en kilobyte
print(total, 'kilobytes')
###Output
467.22 kilobytes
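###Markdown
Memory is only one of the suggested exploratory questions; a brief sketch covering data types, missing values and records per class, assuming the `digits` DataFrame built earlier:
###Code
print(digits.dtypes.value_counts())                    # column data types
print("any nulls:", digits.isnull().values.any())      # missing values?
print(digits["target"].value_counts().sort_index())    # records per class
###Output
_____no_output_____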
###Markdown
Ejercicio 2**Visualización:** Para visualizar los datos utilizaremos el método `imshow` de `matplotlib`. Resulta necesario convertir el arreglo desde las dimensiones (1,64) a (8,8) para que la imagen sea cuadrada y pueda distinguirse el dígito. Superpondremos además el label correspondiente al dígito, mediante el método `text`. Esto nos permitirá comparar la imagen generada con la etiqueta asociada a los valores. Realizaremos lo anterior para los primeros 25 datos del archivo.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for i in range(1, 26):
img = digits_dict["images"][i - 1]  # i runs from 1 to 25, images are indexed from 0
fig.add_subplot(5, 5, i)
plt.imshow(img)
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
from metrics_classification import summary_metrics as sm
from sklearn.metrics import confusion_matrix
#Entrenamiento del modelo
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20,
train_size=0.80,
random_state=1997)
#Imprimimos el conjunto de entrenamiento y testeo
print('numero de filas train set : ',len(X_train))
print('numero de filas test set : ',len(X_test))
###Output
numero de filas train set : 1437
numero de filas test set : 360
###Markdown
REGRESIÓN LOGÍSTICA
###Code
#Modelo de Regresión logística usando GridsearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
import time
#Selección de hiperparámetros
metric_lr = {
'penalty' : ['l1', 'l2'],
'class_weight' : ['balanced', None],
'solver' : ['liblinear'],
'random_state':[0,1997]
}
lr = LogisticRegression()
lr_gridsearchcv = GridSearchCV(estimator = lr, param_grid = metric_lr, cv = 10)
#Temporizador
start_time = time.time()
lr_grid_result = lr_gridsearchcv.fit(X_train, y_train)
print("%s segundos" % (time.time() - start_time))
#Vemos los mejores parametros utilizados
print("El mejor tiempo es de: %f usando %s" % (lr_grid_result.best_score_, lr_grid_result.best_params_))
#Calculo de métricas con matriz de confusión
y_lrpred = lr_gridsearchcv.predict(X_test)
d = dict( y=y_test, yhat = y_lrpred)
df_aux= pd.DataFrame.from_dict(d, orient='index').transpose()
print(confusion_matrix(y_test,y_lrpred))
sm(df_aux)
###Output
[[36 0 0 0 0 0 0 0 0 0]
[ 0 45 0 1 0 0 0 0 2 0]
[ 0 0 23 0 0 0 0 0 0 0]
[ 0 0 0 33 0 1 0 1 1 0]
[ 0 0 0 0 35 0 0 0 0 0]
[ 0 0 1 0 0 35 2 0 0 1]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 32 1 1]
[ 0 0 0 0 0 0 0 0 27 0]
[ 0 0 0 0 0 0 0 0 0 47]]
###Markdown
KNN
###Code
#Método K-Nearest Neighbours usando GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
#Selección de hiperparámetros
metric_knn = {
'n_neighbors' : [5, 7, 11, 17],
'weights' : ['uniform', 'distance'],
'metric' : ['manhattan','chebyshev'],
'algorithm' : ['auto','ball_tree', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
#temporizador
start_time = time.time()
knn_grid_result = knn_gridsearchcv.fit(X_train, y_train)
print(" %s segundos" % (time.time() - start_time))
print("Mejor tiempo: %f usando %s" % (knn_grid_result.best_score_, knn_grid_result.best_params_))
#Calculo de métricas con matriz de confusión
y_knnpred = knn_gridsearchcv.predict(X_test)
d = dict( y=y_test, yhat = y_knnpred)
df_aux= pd.DataFrame.from_dict(d, orient='index').transpose()
print(confusion_matrix(y_test,y_knnpred))
sm(df_aux)
###Output
[[35 0 0 0 1 0 0 0 0 0]
[ 0 48 0 0 0 0 0 0 0 0]
[ 0 0 23 0 0 0 0 0 0 0]
[ 0 0 0 36 0 0 0 0 0 0]
[ 0 0 0 0 35 0 0 0 0 0]
[ 0 0 0 1 0 37 1 0 0 0]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 33 0 1]
[ 0 0 0 0 0 0 0 0 26 1]
[ 0 0 0 0 0 0 0 0 0 47]]
###Markdown
Perceptron
###Code
from sklearn.linear_model import Perceptron
turned_parameters ={'tol':[1e-3,1e-5,1e-1],
'random_state': [0,10],
'shuffle':[True,False],
'eta0':[1,0.5,10]
}
scores = ['precision', 'recall']
P_gridsearchcv = GridSearchCV(estimator = Perceptron(), param_grid = turned_parameters, cv = 10)
#Temporizador
start_time = time.time()
P_grid_result = P_gridsearchcv.fit(X_train, y_train)
print("%s segundos" % (time.time() - start_time))
P_grid_result = P_gridsearchcv.fit(X_train, y_train)
#Calculo de métricas con la matriz de confusión
y_ppred = P_gridsearchcv.predict(X_test)
d = dict( y=y_test, yhat = y_ppred)
df_aux= pd.DataFrame.from_dict(d, orient='index').transpose()
print(confusion_matrix(y_test,y_ppred))
sm(df_aux)
###Output
[[35 0 0 0 0 0 0 0 1 0]
[ 0 45 0 0 0 0 0 0 3 0]
[ 0 1 20 1 0 0 0 0 1 0]
[ 0 0 0 34 0 0 0 1 1 0]
[ 0 0 0 0 35 0 0 0 0 0]
[ 0 0 0 0 0 36 1 0 1 1]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 32 1 1]
[ 0 0 0 0 0 0 0 0 27 0]
[ 0 1 0 1 0 0 0 1 1 43]]
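###Markdown
The closing questions of `Ejercicio 3` (best model, fastest fit) are not answered explicitly above; a minimal sketch that lines up the three fitted searches, assuming `lr_grid_result`, `knn_grid_result` and `P_grid_result` from the previous cells:
###Code
results = {
    "logistic regression": lr_grid_result,
    "k-nearest neighbours": knn_grid_result,
    "perceptron": P_grid_result,
}
for name, res in results.items():
    # mean_fit_time comes from GridSearchCV's cv_results_ bookkeeping
    print(f"{name}: best CV accuracy {res.best_score_:.3f}, "
          f"mean fit time {res.cv_results_['mean_fit_time'].mean():.3f} s, "
          f"params {res.best_params_}")
###Output
_____no_output_____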
###Markdown
Ejercicio 4__Comprensión del modelo:__ Tomando en cuenta el mejor modelo entontrado en el `Ejercicio 3`, debe comprender e interpretar minuciosamente los resultados y gráficos asocados al modelo en estudio, para ello debe resolver los siguientes puntos: * **Cross validation**: usando **cv** (con n_fold = 10), sacar una especie de "intervalo de confianza" sobre alguna de las métricas estudiadas en clases: * $\mu \pm \sigma$ = promedio $\pm$ desviación estandar * **Curva de Validación**: Replica el ejemplo del siguiente [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) pero con el modelo, parámetros y métrica adecuada. Saque conclusiones del gráfico. * **Curva AUC–ROC**: Replica el ejemplo del siguiente [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) pero con el modelo, parámetros y métrica adecuada. Saque conclusiones del gráfico.
###Code
#Cross Validation usando KNN
from sklearn.model_selection import cross_val_score
precision = cross_val_score(estimator=knn_gridsearchcv,
X=X_train,
y=y_train,
cv=10)
precision = [round(x,2) for x in precision]
print('Precisiones: {} '.format(precision))
print('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision)))
#curva de validación
from sklearn.model_selection import validation_curve
param_range = np.array([i for i in range(1, 10)])
#Validation curve usando los mejores hiperparámetros
train_scores, test_scores = validation_curve(
KNeighborsClassifier(weights = 'distance',metric = 'euclidean'), X_train, y_train, param_name="n_neighbors", param_range=param_range,
scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
The training-score curve is perfect because the model memorizes the training data (with weights='distance' every training point is predicted by itself). The cross-validation curve is also very good, which tells us that the KNN model performs well regardless of the number of neighbors, although it can be seen that as the number of neighbors grows the performance starts to drop.
###Code
#Determinamos la cantidad de neighbors necesarios
index = np.argmax(test_scores_mean)
param_range[index]
from itertools import cycle
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
y = label_binarize(y, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
n_classes = y.shape[1]
n_samples, n_features = X.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20,
train_size=0.80,
random_state=1997)
classifier = KNeighborsClassifier(weights = 'distance',metric = 'euclidean', n_neighbors = param_range[index])
y_score = classifier.fit(X_train, y_train).predict(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#AOC-ROC para multiples clases (código también obtenido del link)
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
In this case most classes behave very well, but increasing the number of neighbors slightly lowers the performance of the model's predictions.
###Code
#Curva promedio de las multi-clases
import sys
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle='-', linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Para este caso podemos ver que la curva sigue siendo muy buena, que era de esperarse por los casos anteriores. Ejercicio 5__Reducción de la dimensión:__ Tomando en cuenta el mejor modelo encontrado en el `Ejercicio 3`, debe realizar una redcción de dimensionalidad del conjunto de datos. Para ello debe abordar el problema ocupando los dos criterios visto en clases: * **Selección de atributos*** **Extracción de atributos**__Preguntas a responder:__Una vez realizado la reducción de dimensionalidad, debe sacar algunas estadísticas y gráficas comparativas entre el conjunto de datos original y el nuevo conjunto de datos (tamaño del dataset, tiempo de ejecución del modelo, etc.) selección de atributos
###Code
x_training = digits.drop(columns="target")
y_training = digits["target"]
x_training = x_training.drop(['c00','c32','c39'],axis=1)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
k = 30 # número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
X_a=x_training[atributos]
#Método K-Nearest Neighbours usando GridSearchCV
#Selección de hiperparámetros
metric_knn = {
'n_neighbors' : [5, 7, 11, 17],
'weights' : ['uniform', 'distance'],
'metric' : ['manhattan','chebyshev'],
'algorithm' : ['auto','ball_tree', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
#temporizador
start_time = time.time()
knn_grid_result = knn_gridsearchcv.fit(x_training, y_training)
print("tiempo de %s segundos, que demora antes de seleccionar atributos" % (time.time() - start_time))
#Método K-Nearest Neighbours usando GridSearchCV
#Selección de hiperparámetros
metric_knn = {
'n_neighbors' : [5, 7, 11, 17],
'weights' : ['uniform', 'distance'],
'metric' : ['manhattan','chebyshev'],
'algorithm' : ['auto','ball_tree', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
#temporizador
start_time = time.time()
knn_grid_result = knn_gridsearchcv.fit(X_a, y_training)
print('tiempo de %s segundos, que demora despues de seleccionar atributos' % (time.time() - start_time))
print('dataframe shape before attribute selection', np.array(x_training.shape))
print('dataframe shape after attribute selection', np.array(X_a.shape))
###Output
dataframe shape before attribute selection [1797 61]
dataframe shape after attribute selection [1797 30]
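###Markdown
The exercise also asks for attribute *extraction*, which is missing here; a minimal PCA sketch under that reading, assuming `x_training` and `y_training` from the selection step above (30 components chosen to mirror the 30 selected attributes):
###Code
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(x_training)
pca = PCA(n_components=30)
X_pca = pca.fit_transform(X_std)
print("PCA-reduced shape:", X_pca.shape)
print("variance explained by 30 components:", pca.explained_variance_ratio_.sum().round(3))
###Output
_____no_output_____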
###Markdown
Ejercicio 6__Visualizando Resultados:__ A continuación se provee código para comparar las etiquetas predichas vs las etiquetas reales del conjunto de _test_.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # ajustando el modelo
y_pred = list(model.predict(X_test))
# Mostrar los datos correctos
if label=="correctos":
mask = (y_pred == Y_test)
color = "green"
# Show the misclassified examples
elif label=="incorrectos":
mask = (y_pred != Y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = Y_test[mask]
y_aux_pred = np.array(y_pred)[mask]
# Plot the first nx*ny examples of the selected subset
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
fix = X_aux.shape[0]
for i in range(nx):
for j in range(ny):
index = j + ny * i
if index < fix:
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question*** Taking the best model found in `Ejercicio 3`, plot the results when: * the predicted and true values are equal * the predicted and true values differ * When the predicted and true values differ, why do these failures occur?
###Code
mostar_resultados(digits,KNeighborsClassifier(),nx=5, ny=5,label = "correctos")
mostar_resultados(digits,KNeighborsClassifier(),nx=5, ny=5,label = "incorrectos")
###Output
_____no_output_____
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Nicolás González **Rol**: 201673544-52.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a DataFrame named `digits` is built from the data in `digits_dict`, with 65 columns: the first 64 hold the grayscale representation of the image (0 = white, 16 = black) and the last one holds the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos? Cantidad de memoria
###Code
#Memoria utilizada
import sys
memory = digits.memory_usage() #Determinamos la cantidad de memoria por cada columna en bytes
total = (memory[1]*65)/1000 #Como todas las columnas tienen la misma cantidad de memoria multiplicamos por la cantidad
#de columnas que son 65 y dividimos por 1000 para dejar el dato en kilobyte
print(total, 'kilobytes')
###Output
467.22 kilobytes
###Markdown
Tipo de datos
###Code
#Tipo de datos por columna
digits.dtypes
###Output
_____no_output_____
###Markdown
Descripción del dataframe
###Code
digits.describe()
###Output
_____no_output_____
###Markdown
Distribución de los datos
###Code
columnas = digits.columns
y = [i for i in range(len(digits))]
c = 0
fig = plt.figure(figsize = (30,30))
for i in range(64):
plt.subplot(8,8,i+1)
plt.scatter(digits[columnas[i]], y)
plt.title(columnas[i])
###Output
_____no_output_____
###Markdown
Datos nulos
###Code
digits.isnull().values.any()
###Output
_____no_output_____
###Markdown
Cantidad de registros por clase
###Code
pd.value_counts(digits.target)
###Output
_____no_output_____
###Markdown
Ejercicio 2**Visualización:** Para visualizar los datos utilizaremos el método `imshow` de `matplotlib`. Resulta necesario convertir el arreglo desde las dimensiones (1,64) a (8,8) para que la imagen sea cuadrada y pueda distinguirse el dígito. Superpondremos además el label correspondiente al dígito, mediante el método `text`. Esto nos permitirá comparar la imagen generada con la etiqueta asociada a los valores. Realizaremos lo anterior para los primeros 25 datos del archivo.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
#Iteramos sobre los primeros 25 datos
for i in range(1, 26):
img = digits_dict["images"][i-1]
fig.add_subplot(5, 5, i)
plt.imshow(img)
plt.axis('off') #Tuve problemas con los ejes y con esto lo solucione
plt.show()
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
#Metricas entregadas en archivo de la tarea y en clases anteriores con unos ligeros cambios.
from sklearn.metrics import confusion_matrix, accuracy_score, recall_score, precision_score, f1_score
def summary_metrics(y_test,y_pred):
# metrics
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_test,y_pred))
print('\nMetricas:\n ')
print('accuracy: ',accuracy_score(y_test, y_pred))
print('recall: ',recall_score(y_test, y_pred, average='macro'))
print('precision: ',precision_score(y_test, y_pred, average='macro'))
print('f-score: ',f1_score(y_test, y_pred, average='macro'))
print("")
return
#Entrenamiento del modelo
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1997)
#Imprimimos el conjunto de entrenamiento y testeo
print('numero de filas train set : ',len(X_train))
print('numero de filas test set : ',len(X_test))
###Output
numero de filas train set : 1257
numero de filas test set : 540
###Markdown
Regresión Logística
###Code
#Modelo de Regresión logística usando GridsearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
import time
#Selección de hiperparámetros
metric_lr = {
'penalty' : ['l1', 'l2'],
'C' : [100, 10 , 1, 0.1, 0.01],
'class_weight' : ['balanced', None],
'solver' : ['liblinear'],
}
lr = LogisticRegression()
lr_gridsearchcv = GridSearchCV(estimator = lr, param_grid = metric_lr, cv = 10)
start_time = time.time()#Cronometro
lr_grid_result = lr_gridsearchcv.fit(X_train, y_train)
print("--- %s segundos ---" % (time.time() - start_time))
#Presentamos el mejor valor obtenido junto a los mejores hiperparametros
print("Mejor: %f usando %s" % (lr_grid_result.best_score_, lr_grid_result.best_params_))
#Calculo de métricas con matriz de confusión
y_pred = lr_gridsearchcv.predict(X_test)
summary_metrics(y_test,y_pred)
###Output
Matriz de confusion:
[[50 0 0 0 1 0 0 0 0 0]
[ 0 60 0 0 0 0 0 0 2 0]
[ 0 0 44 0 0 0 0 0 0 0]
[ 0 0 0 47 0 0 0 1 1 2]
[ 0 0 0 0 52 0 0 0 0 0]
[ 0 0 1 0 0 55 1 0 0 1]
[ 0 1 0 0 0 0 51 0 0 0]
[ 0 0 0 0 0 0 0 51 1 1]
[ 0 1 0 1 0 0 0 0 48 0]
[ 0 1 0 1 0 0 0 0 1 64]]
Metricas:
accuracy: 0.9666666666666667
recall: 0.9676235844176204
precision: 0.9678849788585003
f-score: 0.9675175358052035
###Markdown
All the metrics are approximately 97%, so the model performed equally well at classifying the positive classes (macro-averaged) and the negative classes (macro-averaged). KNN
###Code
#Método K-Nearest Neighbours usando GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
start_time = time.time()
#Selección de hiperparámetros
metric_knn = {
'n_neighbors' : [3, 5, 11, 19],
'weights' : ['uniform', 'distance'],
'metric' : ['euclidean', 'manhattan'],
'algorithm' : ['auto','ball_tree', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
start_time = time.time()#Cronometro
knn_grid_result = knn_gridsearchcv.fit(X_train, y_train)
print("--- %s segundos ---" % (time.time() - start_time))
print("Mejor: %f usando %s" % (knn_grid_result.best_score_, knn_grid_result.best_params_))
#Calculo de métricas con la matriz de confusión
y_pred = knn_gridsearchcv.predict(X_test)
summary_metrics(y_test,y_pred)
###Output
Matriz de confusion:
[[51 0 0 0 0 0 0 0 0 0]
[ 0 62 0 0 0 0 0 0 0 0]
[ 0 0 44 0 0 0 0 0 0 0]
[ 0 0 0 50 0 0 0 1 0 0]
[ 0 0 0 0 52 0 0 0 0 0]
[ 0 0 0 0 0 56 1 0 0 1]
[ 0 0 0 0 0 0 52 0 0 0]
[ 0 0 0 0 0 0 0 52 0 1]
[ 0 0 0 0 0 0 0 0 49 1]
[ 0 1 0 1 0 0 0 0 2 63]]
Metricas:
accuracy: 0.9833333333333333
recall: 0.9847339981176442
precision: 0.9842113060204071
f-score: 0.9844122013917114
###Markdown
Same conclusion as above, except that the values were around 98% and this algorithm also took less time to fit. SVC
###Code
#Método SVC utilizando GridsearchCV
from sklearn.svm import SVC
#Selección de hiperparámetros
metric_svc = {
'C':[1,10,100,1000],
'gamma':[1,0.1,0.001,0.0001],
'kernel':['linear','rbf']
}
svc = SVC()
svc_gridsearchcv = GridSearchCV(estimator = svc, param_grid = metric_svc, cv = 10)
start_time = time.time()#Cronometro
svc_grid_result = svc_gridsearchcv.fit(X_train, y_train)
print("--- %s segundos ---" % (time.time() - start_time))
print("Mejor: %f usando %s" % (svc_grid_result.best_score_, svc_grid_result.best_params_))
#Calculo de métricas con la matriz de confusión
y_pred = svc_gridsearchcv.predict(X_test)
summary_metrics(y_test,y_pred)
###Output
Matriz de confusion:
[[50 0 0 0 1 0 0 0 0 0]
[ 0 62 0 0 0 0 0 0 0 0]
[ 0 0 44 0 0 0 0 0 0 0]
[ 0 0 0 51 0 0 0 0 0 0]
[ 0 0 0 0 52 0 0 0 0 0]
[ 0 0 0 0 0 56 1 0 0 1]
[ 0 1 0 0 0 0 51 0 0 0]
[ 0 0 0 0 0 0 0 52 0 1]
[ 0 1 0 0 0 0 0 0 49 0]
[ 0 0 0 0 0 0 0 0 2 65]]
Metricas:
accuracy: 0.9851851851851852
recall: 0.9857959958214326
precision: 0.9861584873697762
f-score: 0.9858850029534775
###Markdown
Same conclusion as in the previous cases, except that here every metric reached about 98% and this model was the second slowest to fit. Answer I consider KNN the best model: compared with the other two it explored more search combinations, since it was given more hyperparameters, and it was also the model that took the least time to fit. That is why I will use KNN in what follows. Ejercicio 4__Understanding the model:__ Taking the best model found in `Ejercicio 3`, you must thoroughly understand and interpret the results and plots associated with the model under study; to do so, address the following points: * **Cross validation**: using **cv** (with n_fold = 10), derive a rough "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) with the appropriate model, parameters and metric. Draw conclusions from the plot. * **AUC–ROC curve**: Replicate the example in the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) with the appropriate model, parameters and metric. Draw conclusions from the plot. Cross Validation
###Code
#Cross Validation usando KNN
from sklearn.model_selection import cross_val_score
precision = cross_val_score(estimator=knn_gridsearchcv,
X=X_train,
y=y_train,
cv=10)
precision = [round(x,2) for x in precision]
print('Precisiones: {} '.format(precision))
print('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision)))
###Output
Precisiones: [0.98, 0.98, 0.99, 0.98, 0.98, 1.0, 0.99, 0.98, 0.98, 0.99]
Precision promedio: 0.985 +/- 0.007
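###Markdown
To state the requested $\mu \pm \sigma$ interval explicitly, a one-line follow-up using the `precision` list from the cell above:
###Code
mu, sigma = np.mean(precision), np.std(precision)
print(f"accuracy interval: [{mu - sigma:.3f}, {mu + sigma:.3f}]")
###Output
_____no_output_____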
###Markdown
Curva de Validación
###Code
#Curva de validación con código entregado en el link del enunciado
from sklearn.model_selection import validation_curve
param_range = np.array([i for i in range(1, 10)])
#Validation curve usando los mejores hiperparámetros
train_scores, test_scores = validation_curve(
KNeighborsClassifier(weights = 'distance',metric = 'euclidean'), X_train, y_train, param_name="n_neighbors", param_range=param_range,
scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with KNN")
plt.xlabel("n_neighbors")
plt.ylabel("Score")
plt.ylim(0.9, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
We see that the training curve stays constant at 1, which follows from the nature of the model: with weights='distance' KNN effectively memorizes the training set, so it always gets the training predictions right. On the other hand, the cross-validation accuracy curve starts to decay after reaching its maximum, which makes sense given how KNN works: as the number of neighbors increases, each prediction is averaged over more distant, less relevant points, so more examples end up misclassified. Therefore, increasing the number of neighbors pushes the model towards underfitting rather than overfitting. Curva AUC-ROC
###Code
#Determinamos la cantidad de neighbors necesarios
index = np.argmax(test_scores_mean)
param_range[index]
#Codigo sacado del link del enunciado
from itertools import cycle
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
y = label_binarize(y, classes=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
n_classes = y.shape[1]
n_samples, n_features = X.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1997)
classifier = KNeighborsClassifier(weights = 'distance',metric = 'euclidean', n_neighbors = param_range[index])
y_score = classifier.fit(X_train, y_train).predict(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
### plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
#AOC-ROC para multiples clases (código también obtenido del link)
import sys
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
C:\Users\Nikolo\miniconda3\envs\mat281\lib\site-packages\ipykernel_launcher.py:8: DeprecationWarning: scipy.interp is deprecated and will be removed in SciPy 2.0.0, use numpy.interp instead
###Markdown
En este caso vemos que al igual que la curva anterior como el modelo predice tan bien el problema que la mayoria de los datos obtiene excelentes predicciones, sin embargo, otros se equivocan solo un poco, lo cual se puede deber a que entrena muy bien cierta clase mientras que otras las descuida solo un poco. Ejercicio 5__Reducción de la dimensión:__ Tomando en cuenta el mejor modelo encontrado en el `Ejercicio 3`, debe realizar una reducción de dimensionalidad del conjunto de datos. Para ello debe abordar el problema ocupando los dos criterios visto en clases: * **Selección de atributos*** **Extracción de atributos**__Preguntas a responder:__Una vez realizado la reducción de dimensionalidad, debe sacar algunas estadísticas y gráficas comparativas entre el conjunto de datos original y el nuevo conjunto de datos (tamaño del dataset, tiempo de ejecución del modelo, etc.) Selección de atributos
###Code
x_training = digits.drop(columns="target").drop(['c00','c32','c39'], axis=1) #SE DROPEAN COLUMNAS ADICIONALES PUES TIENEN SOLO 0 EN SUS ENTRADAS LO QUE GENERA PROBLEMAS
y_training = digits["target"]
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
k = 15 # número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
atributos
###Output
_____no_output_____
###Markdown
Extracción de atributos
###Code
# ajustar modelo utilizando PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
x = StandardScaler().fit_transform(x_training)
pca = PCA(n_components=15)
principalComponents = pca.fit_transform(x)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
# graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns=[]
for i in range(1, 16):
if i == 1:
columns.append(f'PC{i}')
else:
columns.append(f'PC{i}')
columns
plt.figure(figsize=(12,4))
plt.bar(x= range(1,16), height=percent_variance, tick_label=columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# plot the variance explained by the cumulative sum of the components
percent_variance_cum = np.cumsum(percent_variance)
columns = ['PC1'] + [f'PC1+...+PC{i}' for i in range(2, 16)]
plt.figure(figsize=(20,10))
plt.bar(x= range(1,16), height=percent_variance_cum, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
###Output
_____no_output_____
###Markdown
Respuestas Nuevo intervalo de confianza Intervalo de confianza mediante la Selección de atributos
###Code
X_k = x_training[atributos]
X_train2, X_test2, y_train2, y_test2 = train_test_split(X_k, y_training, test_size=0.30,
train_size=0.70,
random_state=1997)
classifier = KNeighborsClassifier(weights = 'distance',metric = 'euclidean', n_neighbors = param_range[index])
precision = cross_val_score(estimator=knn_gridsearchcv,
X=X_train2,
y=y_train2,
cv=10)
precision = [round(x,2) for x in precision]
print('Precisiones: {} '.format(precision))
print('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision)))
###Output
Precisiones: [0.98, 0.94, 0.95, 0.91, 0.97, 0.96, 0.97, 0.96, 0.93, 0.91]
Precision promedio: 0.948 +/- 0.024
###Markdown
Intervalo de confianza con PCA
###Code
X_train3, X_test3, y_train3, y_test3 = train_test_split(principalComponents, y_training, test_size=0.30,
train_size=0.70,
random_state=1997)
classifier = KNeighborsClassifier(weights = 'distance',metric = 'euclidean', n_neighbors = param_range[index])
precision = cross_val_score(estimator=knn_gridsearchcv,
X=X_train3,
y=y_train3,
cv=10)
precision = [round(x,2) for x in precision]
print('Precisiones: {} '.format(precision))
print('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision)))
###Output
Precisiones: [0.96, 0.94, 0.98, 0.94, 0.98, 0.97, 0.96, 0.97, 0.96, 0.96]
Precision promedio: 0.962 +/- 0.013
###Markdown
We can see that these confidence intervals show a noticeably lower score than before; this is because, with fewer features, the model does not predict quite as well as in the previous cases. However, the goal of reducing the computation time of the algorithm is met, dropping from around 13 seconds to roughly 4 and 3 seconds. Execution time
###Code
#Método K-Nearest Neighbours Seleccionando atributos
from sklearn.neighbors import KNeighborsClassifier
start_time = time.time()
#Selección de hiperparámetros
metric_knn = {
'n_neighbors' : [3, 5, 11, 19],
'weights' : ['uniform', 'distance'],
'metric' : ['euclidean', 'manhattan'],
'algorithm' : ['auto','ball_tree', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
start_time = time.time()#Cronometro
knn_grid_result = knn_gridsearchcv.fit(X_train2, y_train2)
print("--- %s segundos ---" % (time.time() - start_time))
#Método K-Nearest Neighbours extrayendo atributos y usando PCA
from sklearn.neighbors import KNeighborsClassifier
start_time = time.time()
#Selección de hiperparámetros
metric_knn = {
'n_neighbors' : [3, 5, 11, 19],
'weights' : ['uniform', 'distance'],
'metric' : ['euclidean', 'manhattan'],
'algorithm' : ['auto','ball_tree', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
start_time = time.time()#Cronometro
knn_grid_result = knn_gridsearchcv.fit(X_train3, y_train3)
print("--- %s segundos ---" % (time.time() - start_time))
###Output
--- 8.348847150802612 segundos ---
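###Markdown
As a quick size comparison between the original and the reduced feature sets (a minimal check; it assumes the `x_training`, `X_k` and `principalComponents` objects defined in the cells above are still in memory):
###Code
# number of rows and features before and after each reduction
print("Original feature matrix :", x_training.shape)
print("SelectKBest matrix      :", X_k.shape)
print("PCA matrix              :", principalComponents.shape)
###Output
_____no_output_____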
###Markdown
We can see that the execution time dropped noticeably, because we are now working with a smaller amount of data; it is enough to look at the list of selected attributes, which clearly has fewer columns (see also the shape comparison above). Ejercicio 6__Visualizing results:__ The following code is provided to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # ajustando el modelo
y_pred = list(model.predict(X_test))
# Mostrar los datos correctos
if label=="correctos":
mask = (y_pred == Y_test)
color = "green"
# Mostrar los datos incorrectos
elif label=="incorrectos":
mask = (y_pred != Y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = Y_test[mask]
y_aux_pred = np.array(y_pred)[mask] #corregido
# We'll plot the first 100 examples, randomly choosen
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
correccion = X_aux.shape[0] #corregido
for i in range(nx):
for j in range(ny):
index = j + ny * i
if index < correccion: #corregido
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Pregunta*** Taking into account the best model found in `Ejercicio 3`, plot the results when: * the predicted and true values are equal * the predicted and true values are different * When the predicted and true values are different, why do these failures occur?
###Code
# Grafica de los valores correctos
mostar_resultados(digits,KNeighborsClassifier(),nx=5, ny=5,label = "correctos")
#Gráfico de los valores incorrectos
mostar_resultados(digits,KNeighborsClassifier(),nx=5, ny=5,label = "incorrectos")
###Output
_____no_output_____
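###Markdown
One way to investigate these failures is to look at which digit pairs the classifier confuses. This is only a sketch: it refits a default `KNeighborsClassifier` on the same train/test split used by `mostar_resultados` (local names such as `X_cm` and `knn_cm` are introduced only here), and the off-diagonal entries of the confusion matrix show which true digits (rows) get predicted as which other digits (columns).
###Code
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X_cm = digits.drop(columns="target").values
y_cm = digits["target"].values
X_tr, X_te, y_tr, y_te = train_test_split(X_cm, y_cm, test_size=0.2, random_state=42)
knn_cm = KNeighborsClassifier().fit(X_tr, y_tr)
print(confusion_matrix(y_te, knn_cm.predict(X_te)))
###Output
_____no_output_____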
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Javier Alonso Valladares Cortes**Rol**: 201710508-92.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe named `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 correspond to the grayscale representation of the image (0-white, 255-black) and the last one to the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos?
###Code
digits.describe()
###Output
_____no_output_____
###Markdown
* ¿Cómo se distribuyen los datos?
###Code
digits.describe().loc['mean'].mean() #Calculamos el promedio
digits.describe().loc['std'].mean() #Calculamos el promedio de la desviación estandar
###Output
_____no_output_____
###Markdown
We can see that, across the pixel columns, the data have an average of approximately 4.878 and an average standard deviation of 3.671. * How much memory am I using?
###Code
digits.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1797 entries, 0 to 1796
Data columns (total 65 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 c00 1797 non-null int32
1 c01 1797 non-null int32
2 c02 1797 non-null int32
3 c03 1797 non-null int32
4 c04 1797 non-null int32
5 c05 1797 non-null int32
6 c06 1797 non-null int32
7 c07 1797 non-null int32
8 c08 1797 non-null int32
9 c09 1797 non-null int32
10 c10 1797 non-null int32
11 c11 1797 non-null int32
12 c12 1797 non-null int32
13 c13 1797 non-null int32
14 c14 1797 non-null int32
15 c15 1797 non-null int32
16 c16 1797 non-null int32
17 c17 1797 non-null int32
18 c18 1797 non-null int32
19 c19 1797 non-null int32
20 c20 1797 non-null int32
21 c21 1797 non-null int32
22 c22 1797 non-null int32
23 c23 1797 non-null int32
24 c24 1797 non-null int32
25 c25 1797 non-null int32
26 c26 1797 non-null int32
27 c27 1797 non-null int32
28 c28 1797 non-null int32
29 c29 1797 non-null int32
30 c30 1797 non-null int32
31 c31 1797 non-null int32
32 c32 1797 non-null int32
33 c33 1797 non-null int32
34 c34 1797 non-null int32
35 c35 1797 non-null int32
36 c36 1797 non-null int32
37 c37 1797 non-null int32
38 c38 1797 non-null int32
39 c39 1797 non-null int32
40 c40 1797 non-null int32
41 c41 1797 non-null int32
42 c42 1797 non-null int32
43 c43 1797 non-null int32
44 c44 1797 non-null int32
45 c45 1797 non-null int32
46 c46 1797 non-null int32
47 c47 1797 non-null int32
48 c48 1797 non-null int32
49 c49 1797 non-null int32
50 c50 1797 non-null int32
51 c51 1797 non-null int32
52 c52 1797 non-null int32
53 c53 1797 non-null int32
54 c54 1797 non-null int32
55 c55 1797 non-null int32
56 c56 1797 non-null int32
57 c57 1797 non-null int32
58 c58 1797 non-null int32
59 c59 1797 non-null int32
60 c60 1797 non-null int32
61 c61 1797 non-null int32
62 c62 1797 non-null int32
63 c63 1797 non-null int32
64 target 1797 non-null int32
dtypes: int32(65)
memory usage: 456.4 KB
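###Markdown
Before answering, a quick check of how many records each digit class actually has (a minimal sketch using the `digits` dataframe defined above):
###Code
# number of records per digit class and in total
print(digits["target"].value_counts().sort_index())
print("Total records:", len(digits))
###Output
_____no_output_____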
###Markdown
We can see that the memory used by digits is 456.4 KB. * What type of data are they? We can also see that the data we are working with are int32, i.e. numeric (integer) variables. * How many records per class are there? There are 1797 records in total (each column has 1797 non-null entries), which is roughly 180 records per digit class, as the check above shows. * Are there records that do not match prior knowledge of the data? The column c00 does not match what we would expect from the data: it is filled entirely with zeros, so it contributes no information to this task. Ejercicio 2**Visualization:** To visualize the data we will use the `imshow` method of `matplotlib`. It is necessary to reshape the array from dimensions (1,64) to (8,8) so that the image is square and the digit can be distinguished. We will also overlay the label corresponding to the digit using the `text` method. This will let us compare the generated image with the label associated with the values. We will do this for the first 25 records of the file.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for i in range(5):
for j in range(5):
axs[i,j].imshow(digits_dict["images"][i*5 +j],cmap='gray_r') #Graficamos todas las imagenes
axs[i,j].text(0, 0, digits_dict['target'][i*5 +j], horizontalalignment='center',
verticalalignment='center', fontsize=10, color='blue') #Agregamos el label de la imagen
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
from sklearn import datasets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) #Dividimos los datos
#Como tenemos una cantidad de valores entre 1000-100000, es adecuado tener una relacion 80-20
# Impresion del largo de las filas
print('Veamos el largo de los conjuntos:\n')
print('Cantidad inicial de datos : ',len(X))
print('Largo del conjunto de entrenamiento : ',len(X_train))
print('Largo del conjunto de testeo : ',len(X_test))
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn import svm
#para cada uno crear un conjunto de hiperparametros
parametros_lr = {'penalty': ['l1','l2','elasticnet','none'],'tol':[0.1,0.2,0.3]}
clf_lr = GridSearchCV(LogisticRegression(),parametros_lr,cv = 5, return_train_score =False) #Aplicamos GridSearchCV
clf_lr.fit(X_train,y_train)
parametros_kn = {'algorithm':['brute','kd_tree','ball_tree','auto'],'leaf_size':[1,10,20],'n_neighbors':[1,2,3,4,10,20]}
clf_kn = GridSearchCV(KNeighborsRegressor(),parametros_kn,cv = 5, return_train_score =False)#Aplicamos GridSearchCV
clf_kn.fit(X_train,y_train)
parametros_sv = {'kernel':['rbf','linear'],'C':[1,10,20,30]}
clf_sv = GridSearchCV(svm.SVC(),parametros_sv,cv = 5, return_train_score =False)#Aplicamos GridSearchCV
clf_sv.fit(X_train,y_train)
#Imprimimos la mejor combinación de parámetros para este modelo y el tiempo maximo que se demorá en ajustar
print(clf_lr.best_score_)
print(clf_lr.best_params_)
print('tiempo de entrenamiento = '+str(pd.DataFrame(clf_lr.cv_results_)['std_fit_time'].max()))
#Imprimimos la mejor combinación de parámetros para este modelo y el tiempo maximo que se demorá en ajustar
print(clf_kn.best_score_)
print(clf_kn.best_params_)
print('tiempo de entrenamiento = '+str(pd.DataFrame(clf_kn.cv_results_)['std_fit_time'].max()))
#Imprimimos la mejor combinación de parámetros para este modelo y el tiempo maximo que se demorá en ajustar
print(clf_sv.best_score_)
print(clf_sv.best_params_)
print('tiempo de entrenamiento = '+str(pd.DataFrame(clf_sv.cv_results_)['std_fit_time'].max()))
#Inicializamos el modelo con la mejor combinación
rlog = LogisticRegression(penalty='none',tol=0.1)
rlog.fit(X_train,y_train)
#Graficamos la mátriz de confusión y los valores para distintas métricas
from metrics_classification import *
from sklearn.metrics import confusion_matrix
y_true = list(y_test)
y_pred = list(rlog.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
# ejemplo
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores")
print("")
print(df_metrics)
#Inicializamos el modelo con la mejor combinación
model_kn = KNeighborsRegressor(algorithm='brute',n_neighbors=3,leaf_size = 1)
model_kn.fit(X_train,y_train)
#Graficamos la mátriz de confusión y los valores para distintas métricas
y_true = list(y_test)
y_pred_0 = list(model_kn.predict(X_test))
y_pred = [int(elem) for elem in y_pred_0]
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
# ejemplo
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores")
print("")
print(df_metrics)
#Inicializamos el modelo con la mejor combinación
model_svc = svm.SVC(C=10,kernel='rbf',probability=True)
model_svc.fit(X_train,y_train)
#Graficamos la mátriz de confusión y los valores para distintas métricas
y_true = list(y_test)
y_pred = list(model_svc.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
# ejemplo
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores")
print("")
print(df_metrics)
###Output
Matriz de confusion:
[[33 0 0 0 0 0 0 0 0 0]
[ 0 28 0 0 0 0 0 0 0 0]
[ 0 0 33 0 0 0 0 0 0 0]
[ 0 0 0 33 0 1 0 0 0 0]
[ 0 0 0 0 46 0 0 0 0 0]
[ 0 0 0 0 0 46 1 0 0 0]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 33 0 1]
[ 0 0 0 0 0 1 0 0 29 0]
[ 0 0 0 0 0 0 0 1 0 39]]
Metricas para los regresores
accuracy recall precision fscore
0 0.9861 0.9862 0.9876 0.9868
###Markdown
Which model is best based on its metrics? Based on the metrics, we can clearly see that the best model is the SVM, since the values it yields are much closer to one than those of the other two models. Which model takes the least time to fit? The model that takes the least time is K-Nearest Neighbours. Which model do you choose? We finally choose the SVM model, since it has the best metric values, even though the K-Nearest Neighbours model turned out to be faster. Ejercicio 4__Understanding the model:__ Taking into account the best model found in `Ejercicio 3`, you must thoroughly understand and interpret the results and plots associated with the model under study; to do so, address the following points: * **Cross validation**: using **cv** (with n_fold = 10), obtain a kind of "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. * **AUC–ROC curve**: Replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. We will use the precision metric
###Code
#Aplicamos cross validation para calcular un promedio y una desviación estándar
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model_svc, X, y, cv=10,scoring='precision_micro')
print('Tenemos el intervalo ' + str(round(scores.mean(),3)) + ' ' +'±'+ ' ' + str(round(scores.std(),3)))
#Graficamos la curva de validación con el codigo indicado
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
model_svc, X, y, param_name="gamma", param_range=param_range,
scoring="precision_micro", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
#Graficamos la curva ROC con el codigo asociado
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2,
random_state=0)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(model_svc)
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
From the validation curve we can see that the score stays within the confidence interval computed previously, quite close to one, so we can conclude that our model fits the data quite well. On the other hand, the ROC curve shows that practically the entire area under the curve is covered, so the model is quite good. Ejercicio 5__Dimensionality reduction:__ Taking into account the best model found in `Ejercicio 3`, you must perform a dimensionality reduction of the dataset. To do so, address the problem using the two approaches seen in class: * **Feature selection*** **Feature extraction**__Questions to answer:__Once the dimensionality reduction is done, report some comparative statistics and plots between the original dataset and the new dataset (dataset size, model execution time, etc.) Feature selection
###Code
# Import the required libraries
from sklearn.feature_selection import SelectKBest, chi2
y = digits["target"].values  # restore the original 10-class labels (y was binarized above for the ROC curve)
X_new = SelectKBest(chi2, k=20).fit_transform(X, y)  # keep the 20 best features
X_new.shape
###Output
_____no_output_____
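###Markdown
To see which of the original pixel columns were kept, a minimal sketch (it assumes the `digits` dataframe from above; `selector` and `selected_columns` are names introduced only for this check):
###Code
# refit the selector on the labelled dataframe to recover the names of the chosen columns
selector = SelectKBest(chi2, k=20).fit(digits.drop(columns="target"), digits["target"])
selected_columns = digits.drop(columns="target").columns[selector.get_support()]
print(list(selected_columns))
###Output
_____no_output_____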
###Markdown
Extracción de atributos
###Code
#Escalamos nuestros datos con la función standarscaler
from sklearn.preprocessing import StandardScaler
df = digits
features = df.drop(columns=['target']).columns
x_aux = df.loc[:, features].values
y_aux = df.loc[:, ['target']].values
x_aux = StandardScaler().fit_transform(x_aux)
# Ajustamos el modelo
from sklearn.decomposition import PCA
pca = PCA(n_components=30) #Utilizamos 30 componentes
principalComponents = pca.fit_transform(x_aux)
# graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = [f'PC{i}' for i in range(1, 31)]
plt.figure(figsize=(12,4))
plt.bar(x= range(1,31), height=percent_variance, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# graficar varianza por la suma acumulada de los componente
percent_variance_cum = np.cumsum(percent_variance)
columns = ['PC1'] + [f'+PC{i}' for i in range(2, 31)]
plt.figure(figsize=(12,4))
plt.bar(x= range(1,31), height=percent_variance_cum, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
###Output
_____no_output_____
###Markdown
Then, we can see that about 85% of the variance of the variables is explained using 30 components; next we fit the model for these components.
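Alternatively, scikit-learn can choose the number of components for a target fraction of explained variance directly (a minimal sketch; it assumes the standardized matrix `x_aux` from the cell above, and `pca_85` is a name introduced only here):
###Code
# let PCA pick the smallest number of components that explain ~85% of the variance
pca_85 = PCA(n_components=0.85)
pca_85.fit(x_aux)
print("Components needed for 85% of the variance:", pca_85.n_components_)
###Output
_____no_output_____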
###Code
pca = PCA(n_components=30) #Inicializamos nuestro modelo
columns_aux = [f'PC{i}' for i in range(1, 31)]
principalComponents = pca.fit_transform(x_aux)
principalDataframe = pd.DataFrame(data = principalComponents, columns = columns_aux)
targetDataframe = df[['target']]
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head() #Creamos un nuevo dataframe con las nuevas clases filtradas
# componenetes proyectadas
Y_aux= df[['target']]
X_new = pca.fit_transform(df[digits.drop(columns=['target']).columns])
X_train_new, X_test_new, Y_train_new, Y_test_new = train_test_split(X_new, Y_aux, test_size=0.2, random_state = 2)
#Comparamos las cantidad de datos de los conjuntos
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
langs = ['Original', 'Nuevo']
students = [X.shape[1],X_new.shape[1]]
ax.bar(langs,students)
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the original set has many more columns (features) than the one to which we applied the filter.
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
parametros_sv = {'kernel':['rbf','linear'],'C':[1,10,20,30]}
clf_sv = GridSearchCV(svm.SVC(),parametros_sv,cv = 5, return_train_score =False)
clf_sv.fit(X_train,y_train)
print(clf_sv.best_score_)
print(clf_sv.best_params_)
print('tiempo de entrenamiento = '+str(pd.DataFrame(clf_sv.cv_results_)['std_fit_time'].max()))
t_original = pd.DataFrame(clf_sv.cv_results_)['std_fit_time'].max()
parametros_sv = {'kernel':['rbf','linear'],'C':[1,10,20,30]}
clf_sv = GridSearchCV(svm.SVC(),parametros_sv,cv = 5, return_train_score =False)
clf_sv.fit(X_train_new,Y_train_new)
print(clf_sv.best_score_)
print(clf_sv.best_params_)
print('tiempo de entrenamiento = '+str(pd.DataFrame(clf_sv.cv_results_)['std_fit_time'].max()))
t_nuevo = pd.DataFrame(clf_sv.cv_results_)['std_fit_time'].max()
#Comparamos los tiempos que demora el modelo en ajustarse con los distintos conjuntos que tenemos
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
langs = ['Tiempo original', 'Tiempo nuevo']
students = [t_original,t_nuevo]
ax.bar(langs,students)
plt.show()
###Output
_____no_output_____
###Markdown
Moreover, we can clearly observe that the model is faster if we use the new, reduced dataset. Ejercicio 6__Visualizing results:__ The following code is provided to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # ajustando el modelo
Y_pred = np.array(model.predict(X_test))  # use the model received as argument (not the global "modelo")
# Mostrar los datos correctos
if label=="correctos":
mask = (Y_pred == Y_test)
color = "green"
# Mostrar los datos incorrectos
elif label=="incorrectos":
mask = (Y_pred != Y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = Y_test[mask]
y_aux_pred = Y_pred[mask]
# We'll plot the first 100 examples, randomly choosen
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
modelo = svm.SVC(C=10,kernel='rbf',probability=True) #Inicializamos el modelo del ejercicio 3
###Output
_____no_output_____
###Markdown
**Pregunta*** Taking into account the best model found in `Ejercicio 3`, plot the results when: * the predicted and true values are equal * the predicted and true values are different
###Code
mostar_resultados(digits,modelo,nx=5, ny=5,label = "correctos")
mostar_resultados(digits,modelo,nx=2, ny=2,label = "incorrectos")
###Output
_____no_output_____
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Fabián Rubilar Álvarez **Rol**: 201510509-K2.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe named `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 correspond to the grayscale representation of the image (0-white, 255-black) and the last one to the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos?
###Code
#Primero veamos los tipos de datos del DF y cierta información que puede ser de utilidad
digits.info()
#Veamos si hay valores nulos en las columnas
if True not in digits.isnull().any().values:
print('No existen valores nulos')
#Veamos que elementos únicos tenemos en la columna target del DF
digits.target.unique()
#Veamos cuantos registros por clase existen luego de saber que hay 10 tipos de clase en la columna target
(u,v) = np.unique(digits['target'] , return_counts = True)
for i in range(0,10):
print ('Tenemos', v[i], 'registros para', u[i])
#Como tenemos 10 tipos de elementos en target, veamos las caracteristicas que poseen los datos
caract_datos = [len(digits[digits['target'] ==i ].target) for i in range(0,10)]
print ('El total de los datos es:', sum(caract_datos))
print ('El máximo de los datos es:', max(caract_datos))
print ('El mínimo de los datos es:', min(caract_datos))
print ('El promedio de los datos es:', 0.1*sum(caract_datos))
###Output
El total de los datos es: 1797
El máximo de los datos es: 183
El mínimo de los datos es: 174
El promedio de los datos es: 179.70000000000002
###Markdown
Por lo tanto, tenemos un promedio de 180 (aproximando por arriba) donde el menor valor es de 174 y el mayor valor es de 183.
###Code
#Para mejorar la visualización, construyamos un histograma
digits.target.plot.hist(bins=12, alpha=0.5)
###Output
_____no_output_____
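###Markdown
Before discussing sizes, a quick check of the actual memory footprint of the dataframe (a minimal sketch using the `digits` dataframe defined above):
###Code
# total memory used by the dataframe, in kilobytes
print("digits uses about %.1f KB" % (digits.memory_usage().sum() / 1024))
###Output
_____no_output_____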
###Markdown
We know that each record corresponds to an 8×8 matrix with integer entries from 0 to 16. Each record comes from a 32×32 matrix that was processed with a dimensionality-reduction method. Moreover, each record is an image of a digit from 0 to 9, so 8×8 = 64 values are used, plus one value to store the label. Thus, with 1797 records, we store 1797×65 = 116,805 values in total (about 456 KB at int32, as the check above confirms). If the dimensionality reduction were not applied, we would have 32×32×1797 = 1,840,128 values, roughly 15.7 times more. Ejercicio 2**Visualization:** To visualize the data we will use the `imshow` method of `matplotlib`. It is necessary to reshape the array from dimensions (1,64) to (8,8) so that the image is square and the digit can be distinguished. We will also overlay the label corresponding to the digit using the `text` method. This will let us compare the generated image with the label associated with the values. We will do this for the first 25 records of the file.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for x in range(0,5):
for y in range(0,5):
axs[x,y].imshow(digits_dict['images'][5*x+y], cmap = 'plasma')
axs[x,y].text(3,4,s = digits['target'][5*x+y], fontsize = 30)
###Output
_____no_output_____
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
from sklearn import datasets
from sklearn.model_selection import train_test_split
#Ahora vemos los conjuntos de testeo y entrenamiento
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=42)
print('El conjunto de testeo tiene la siguiente cantidad de datos:', len(y_test))
print('El conjunto de entrenamiento tiene la siguiente cantidad de datos:', len(y_train))
#REGRESIÓN LOGÍSTICA
from sklearn.linear_model import LogisticRegression
from metrics_classification import *
from sklearn.metrics import r2_score
from sklearn.metrics import confusion_matrix
#Creando el modelo
rlog = LogisticRegression()
rlog.fit(X_train, y_train) #Ajustando el modelo
#Matriz de confusión
y_true = list(y_test)
y_pred = list(rlog.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
#Métricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores")
print("")
print(df_metrics)
#K-NEAREST NEIGHBORS
from sklearn.neighbors import KNeighborsClassifier
from sklearn import neighbors
from sklearn import preprocessing
#Creando el modelo
knn = neighbors.KNeighborsClassifier()
knn.fit(X_train,y_train) #Ajustando el modelo
#Matriz de confusión
y_true = list(y_test)
y_pred = list(knn.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
#Métricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores")
print("")
print(df_metrics)
#ÁRBOL DE DECISIÓN
from sklearn.tree import DecisionTreeClassifier
#Creando el modelo
add = DecisionTreeClassifier(max_depth=10)
add = add.fit(X_train, y_train) #Ajustando el modelo
#Matriz de confusión
y_true = list(y_test)
y_pred = list(add.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
#Métricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores")
print("")
print(df_metrics)
#GRIDSEARCH
from sklearn.model_selection import GridSearchCV
model = DecisionTreeClassifier()
# rango de parametros
rango_criterion = ['gini','entropy']
rango_max_depth = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 20, 30, 40, 50, 70, 90, 120, 150])
param_grid = dict(criterion = rango_criterion, max_depth = rango_max_depth)
print(param_grid)
print('\n')
gs = GridSearchCV(estimator=model,
param_grid=param_grid,
scoring='accuracy',
cv=10,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print('\n')
print(gs.best_params_)
###Output
{'criterion': ['gini', 'entropy'], 'max_depth': array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15,
20, 30, 40, 50, 70, 90, 120, 150])}
0.8761308281141267
{'criterion': 'entropy', 'max_depth': 11}
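###Markdown
As a quick sanity check of the tuned tree on the held-out data (a minimal sketch; it reuses `gs`, `X_test` and `y_test` from the cells above, and `best_tree` is just a local name for this check):
###Code
# evaluate the best estimator found by GridSearchCV on the test set
best_tree = gs.best_estimator_
print("Test accuracy of the tuned decision tree:", best_tree.score(X_test, y_test))
###Output
_____no_output_____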
###Markdown
Ejercicio 4__Understanding the model:__ Taking into account the best model found in `Ejercicio 3`, you must thoroughly understand and interpret the results and plots associated with the model under study; to do so, address the following points: * **Cross validation**: using **cv** (with n_fold = 10), obtain a kind of "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. * **AUC–ROC curve**: Replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot.
###Code
#Cross Validation
from sklearn.model_selection import cross_val_score
model = KNeighborsClassifier()
precision = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 10)
med = precision.mean()#Media
desv = precision.std()#Desviación estandar
a = med - desv
b = med + desv
print('(',a,',', b,')')
#Curva de Validación
from sklearn.model_selection import validation_curve
knn.get_params()
parameters = np.arange(1,10)
train_scores, test_scores = validation_curve(model,
X_train,
y_train,
param_name = 'n_neighbors',
param_range = parameters,
scoring = 'accuracy',
n_jobs = -1)
train_scores_mean = np.mean(train_scores, axis = 1)
train_scores_std = np.std(train_scores, axis = 1)
test_scores_mean = np.mean(test_scores, axis = 1)
test_scores_std = np.std(test_scores, axis = 1)
plt.figure(figsize=(12,8))
plt.title('Validation Curve (KNeighbors)')
plt.xlabel('n_neighbors')
plt.ylabel('scores')
#Train
plt.semilogx(parameters,
train_scores_mean,
label = 'Training Score',
color = 'red',
lw =2)
plt.fill_between(parameters,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha = 0.2,
color = 'red',
lw = 2)
#Test
plt.semilogx(parameters,
test_scores_mean,
label = 'Cross Validation Score',
color = 'navy',
lw =2)
plt.fill_between(parameters,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha = 0.2,
color = 'navy',
lw = 2)
plt.legend(loc = 'best')
plt.show()
#Curva AUC–ROC
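# ------------------------------------------------------------------
# A minimal AUC-ROC sketch (the placeholder above was left empty).
# Assumptions: we score with the class probabilities of a default
# KNeighborsClassifier refit on the same split, and we plot the
# micro-average ROC over the 10 binarized digit classes. The names
# knn_roc, y_test_bin and y_score are introduced only for this sketch.
# ------------------------------------------------------------------
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

knn_roc = KNeighborsClassifier().fit(X_train, y_train)
y_score = knn_roc.predict_proba(X_test)               # shape (n_samples, 10)
y_test_bin = label_binarize(y_test, classes=np.arange(10))
fpr_micro, tpr_micro, _ = roc_curve(y_test_bin.ravel(), y_score.ravel())
auc_micro = auc(fpr_micro, tpr_micro)

plt.figure(figsize=(8, 6))
plt.plot(fpr_micro, tpr_micro, color='darkorange', lw=2,
         label='micro-average ROC (area = %0.2f)' % auc_micro)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Curva AUC-ROC (micro-average, KNN)')
plt.legend(loc='lower right')
plt.show()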
###Output
_____no_output_____
###Markdown
Ejercicio 5__Reducción de la dimensión:__ Tomando en cuenta el mejor modelo encontrado en el `Ejercicio 3`, debe realizar una reducción de dimensionalidad del conjunto de datos. Para ello debe abordar el problema ocupando los dos criterios visto en clases: * **Selección de atributos*** **Extracción de atributos**__Preguntas a responder:__Una vez realizado la reducción de dimensionalidad, debe sacar algunas estadísticas y gráficas comparativas entre el conjunto de datos original y el nuevo conjunto de datos (tamaño del dataset, tiempo de ejecución del modelo, etc.)
###Code
#Selección de atributos
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
df = pd.DataFrame(X)
df.columns = [f'P{k}' for k in range(1,X.shape[1]+1)]
df['y']=y
print('Vemos que el df respectivo es de la forma:')
print('\n')
print(df.head())
# Separamos las columnas objetivo
x_training = df.drop(['y',], axis=1)
y_training = df['y']
# Aplicando el algoritmo univariante de prueba F.
k = 40 # número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
print('\n')
print('Los atributos quedan como:')
print('\n')
print(atributos)
#Veamos que pasa si entrenamos un nuevo modelo K-NEAREST NEIGHBORS con los atributos seleccionados anteriormente
x=df[atributos]
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=42)
#Creando el modelo
knn = neighbors.KNeighborsClassifier()
knn.fit(x_train,y_train) #Ajustando el modelo
#Matriz de confusión
y_true = list(y_test)
y_pred = list(knn.predict(x_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
#Métricas
df_temp = pd.DataFrame(
{
'y':y_true,
'yhat':y_pred
}
)
df_metrics = summary_metrics(df_temp)
print("\nMetricas para los regresores ")
print("")
print(df_metrics)
#Extracción de atributos
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
x = StandardScaler().fit_transform(X)
n_components = 50
pca = PCA(n_components)
principalComponents = pca.fit_transform(x)
# Graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = [ 'P'+str(i) for i in range(n_components)]
plt.figure(figsize=(20,4))
plt.bar(x= range(0,n_components), height=percent_variance, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# graficar varianza por la suma acumulada de los componente
percent_variance_cum = np.cumsum(percent_variance)
columns = [ 'P' + str(0) + '+...+P' + str(i) for i in range(n_components) ]
plt.figure(figsize=(20,4))
plt.bar(x= range(0,n_components), height=percent_variance_cum, tick_label=columns)
plt.xticks(range(len(columns)), columns, rotation=90)
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
###Output
_____no_output_____
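###Markdown
To compare the original and reduced datasets as the statement asks, a minimal sketch (it assumes `x_training`, `y_training`, `atributos` and `principalComponents` from the cells above; `fit_time` is a helper defined only for this check, timing a single fit of a default KNN):
###Code
import time

def fit_time(features, labels):
    # time one fit of a default KNN classifier on the given feature matrix
    start = time.time()
    KNeighborsClassifier().fit(features, labels)
    return time.time() - start

print("Original    :", x_training.shape, "-> fit in %.4f s" % fit_time(x_training, y_training))
print("SelectKBest :", x_training[atributos].shape, "-> fit in %.4f s" % fit_time(x_training[atributos], y_training))
print("PCA         :", principalComponents.shape, "-> fit in %.4f s" % fit_time(principalComponents, y_training))
###Output
_____no_output_____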
###Markdown
Ejercicio 6__Visualizando Resultados:__ A continuación se provee código para comparar las etiquetas predichas vs las etiquetas reales del conjunto de _test_.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns = "target").values
y = digits["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
model.fit(X_train, y_train) # ajustando el modelo
y_pred = model.predict(X_test)
# Mostrar los datos correctos
if label == "correctos":
mask = (y_pred == y_test)
color = "green"
# Mostrar los datos incorrectos
elif label == "incorrectos":
mask = (y_pred != y_test)
color = "red"
else:
raise ValueError("Valor incorrecto")
X_aux = X_test[mask]
y_aux_true = y_test[mask]
y_aux_pred = y_pred[mask]
# We'll plot the first 100 examples, randomly choosen
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation = 'nearest', cmap = 'gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment = 'center', verticalalignment = 'center', fontsize = 10, color = color)
ax[i][j].text(7, 0, label_true, horizontalalignment = 'center', verticalalignment = 'center', fontsize = 10, color = 'blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Pregunta*** Taking into account the best model found in `Ejercicio 3`, plot the results when: * the predicted and true values are equal * the predicted and true values are different * When the predicted and true values are different, why do these failures occur?
###Code
mostar_resultados(digits, KNeighborsClassifier(), nx=5, ny=5,label = "correctos")
mostar_resultados(digits, neighbors.KNeighborsClassifier(), nx=5, ny=5,label = "incorrectos")
###Output
_____no_output_____
###Markdown
Tarea N°02 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda.**Nombre**: Cristóbal Vivar Vargas**Rol**: 201723025-82.- Debes pushear este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scripts, etc.3.- Se evaluará:- Soluciones- Código- Que Binder esté bien configurado.- Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- Clasificación de dígitosEn este laboratorio realizaremos el trabajo de reconocer un dígito a partir de una imagen.  El objetivo es a partir de los datos, hacer la mejor predicción de cada imagen. Para ellos es necesario realizar los pasos clásicos de un proyecto de _Machine Learning_, como estadística descriptiva, visualización y preprocesamiento. * Se solicita ajustar al menos tres modelos de clasificación: * Regresión logística * K-Nearest Neighbours * Uno o más algoritmos a su elección [link](https://scikit-learn.org/stable/supervised_learning.htmlsupervised-learning) (es obligación escoger un _estimator_ que tenga por lo menos un hiperparámetro). * En los modelos que posean hiperparámetros es mandatorio buscar el/los mejores con alguna técnica disponible en `scikit-learn` ([ver más](https://scikit-learn.org/stable/modules/grid_search.htmltuning-the-hyper-parameters-of-an-estimator)).* Para cada modelo, se debe realizar _Cross Validation_ con 10 _folds_ utilizando los datos de entrenamiento con tal de determinar un intervalo de confianza para el _score_ del modelo.* Realizar una predicción con cada uno de los tres modelos con los datos _test_ y obtener el _score_. * Analizar sus métricas de error (**accuracy**, **precision**, **recall**, **f-score**) Exploración de los datosA continuación se carga el conjunto de datos a utilizar, a través del sub-módulo `datasets` de `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe named `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 correspond to the grayscale representation of the image (0-white, 255-black) and the last one to the digit (`target`), named _target_.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Ejercicio 1**Análisis exploratorio:** Realiza tu análisis exploratorio, no debes olvidar nada! Recuerda, cada análisis debe responder una pregunta.Algunas sugerencias:* ¿Cómo se distribuyen los datos?* ¿Cuánta memoria estoy utilizando?* ¿Qué tipo de datos son?* ¿Cuántos registros por clase hay?* ¿Hay registros que no se correspondan con tu conocimiento previo de los datos? Distribución de los Datos:
###Code
#Desripición de las columnas del DataFrame
digits.describe()
#Grafico de cada columna para ver distribucion de los datos
columnas = digits.columns
y = [i for i in range(len(digits))]
c = 0
fig = plt.figure(figsize = (30,30))
for i in range(64):
plt.subplot(8,8,i+1)
plt.scatter(digits[columnas[i]], y)
plt.title(columnas[i])
###Output
_____no_output_____
###Markdown
We can see that the first column of plots shows a distribution close to uniform, as does the eighth, while the rest show fairly irregular distributions. Memory:
###Code
#Memoria utilizada
import sys
memoria = digits.memory_usage() #Se determina la memoria usada en el DataFrame por columna
memoria
#Se suma la memoria de cada columna para conocer el total
total = 0
for i in range(0,len(memoria)):
total += memoria[i]
print("El DataFrame digits usa un total de:",total, 'bytes')
###Output
El DataFrame digits usa un total de: 467348 bytes
###Markdown
Tipos de Datos:
###Code
print(np.array(digits.dtypes))
digits.dtypes.unique()
###Output
_____no_output_____
###Markdown
Los datos de todas las columnas son enteros Registros por clase:
###Code
#Se muestra una Dataframe con la cantidad de Registros por clase
clas_reg = (pd.value_counts(digits.target)
.to_frame()
.reset_index()
.sort_values(by = "index")
.rename(columns = {"index": "Clase", "target": "Cantidad"})
.reset_index(drop = True)
)
clas_reg
###Output
_____no_output_____
###Markdown
¿Hay valores NaN's?:
###Code
digits.isnull().sum().sum()
###Output
_____no_output_____
###Markdown
O sea, no hay valores NaN en todo el DataFrame Ejercicio 2**Visualización:** Para visualizar los datos utilizaremos el método `imshow` de `matplotlib`. Resulta necesario convertir el arreglo desde las dimensiones (1,64) a (8,8) para que la imagen sea cuadrada y pueda distinguirse el dígito. Superpondremos además el label correspondiente al dígito, mediante el método `text`. Esto nos permitirá comparar la imagen generada con la etiqueta asociada a los valores. Realizaremos lo anterior para los primeros 25 datos del archivo.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualiza imágenes de los dígitos utilizando la llave `images` de `digits_dict`. Sugerencia: Utiliza `plt.subplots` y el método `imshow`. Puedes hacer una grilla de varias imágenes al mismo tiempo!
###Code
#Se crea una grilla de 5 x 5
fig, axs = plt.subplots(5, 5, figsize=(12, 12))
#Se itera por las posiciones en la grilla mostrando las imagenes
for i in range(0, 5):
for j in range(0,5):
img = digits_dict["images"][j + 5*i] #Se muestran en orden las imagenes
axs[i,j].imshow(img)
plt.show()
###Output
_____no_output_____
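###Markdown
The statement also asks to overlay the digit label on each image with the `text` method; here is a minimal variant of the grid above that does so (it assumes the same `digits_dict` used above, and `k` is just a local index for this sketch):
###Code
# same 5x5 grid, with the target label drawn on top of each image
fig, axs = plt.subplots(5, 5, figsize=(12, 12))
for i in range(5):
    for j in range(5):
        k = j + 5 * i
        axs[i, j].imshow(digits_dict["images"][k], cmap="gray_r")
        axs[i, j].text(0, 1, str(digits_dict["target"][k]), fontsize=14, color="red")
plt.show()
###Output
_____no_output_____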
###Markdown
Ejercicio 3**Machine Learning**: En esta parte usted debe entrenar los distintos modelos escogidos desde la librería de `skelearn`. Para cada modelo, debe realizar los siguientes pasos:* **train-test** * Crear conjunto de entrenamiento y testeo (usted determine las proporciones adecuadas). * Imprimir por pantalla el largo del conjunto de entrenamiento y de testeo. * **modelo**: * Instanciar el modelo objetivo desde la librería sklearn. * *Hiper-parámetros*: Utiliza `sklearn.model_selection.GridSearchCV` para obtener la mejor estimación de los parámetros del modelo objetivo.* **Métricas**: * Graficar matriz de confusión. * Analizar métricas de error.__Preguntas a responder:__* ¿Cuál modelo es mejor basado en sus métricas?* ¿Cuál modelo demora menos tiempo en ajustarse?* ¿Qué modelo escoges?
###Code
import metrics_classification as metrics
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix
import time
X = digits.drop(columns="target").values
y = digits["target"].values
###Output
_____no_output_____
###Markdown
Regresión Logística:
###Code
#Spliteo train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
print('El train set tiene',len(X_train), 'filas')
print('El test set tiene',len(X_test),'filas')
# Se importa un Modelo de Regresion Logística
from sklearn.linear_model import LogisticRegression
#Diccionario de Hiper-Parámetros a comparar con gridsearch
metric_lr = {
'penalty' : ['l1', 'l2'],
'C' : [100, 10 , 1, 0.1, 0.01],
'class_weight' : ['balanced', None],
'solver' : ['liblinear'],
}
lr = LogisticRegression()
lr_gridsearchcv = GridSearchCV(estimator = lr, param_grid = metric_lr, cv = 10)
start_time = time.time() #Tiempo de inicio
lr_grid_result = lr_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo que tomó ajustarse el modelo
print(" El modelo se ajustó en %s segundos" % (time.time() - start_time))
# Se presenta el mejor score del modelo y los parametros usados para obtener ese score
print("El mejor score tuvo un valor de: %f usando los parametros: \n %s"
% (lr_grid_result.best_score_, lr_grid_result.best_params_))
#Predicción del modelo
y_pred = lr_gridsearchcv.predict(X_test)
#Definición de DataFrame para usar en summary_metrics
df_log = pd.DataFrame({
'y': y_test,
'yhat': y_pred
})
print("La matriz de confusión asociada al modelo es: \n \n",confusion_matrix(y_test,y_pred), "\n \n Y las métricas son:")
metrics.summary_metrics(df_log)
###Output
La matriz de confusión asociada al modelo es:
[[55 0 0 0 0 1 0 0 0 0]
[ 0 53 0 1 0 0 0 0 1 0]
[ 0 0 60 2 0 0 0 0 0 0]
[ 0 0 0 52 0 2 0 0 1 1]
[ 0 2 0 0 45 0 0 0 0 0]
[ 0 0 0 0 0 49 0 0 0 1]
[ 0 0 0 0 0 0 53 0 0 0]
[ 0 0 0 1 0 0 0 49 1 1]
[ 0 3 1 0 0 2 0 0 48 0]
[ 0 0 0 1 0 0 0 0 2 52]]
Y las métricas son:
###Markdown
The four metrics are very similar and close to 1.

K-Nearest Neighbors:
###Code
#Spliteo train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
print('El train set tiene',len(X_train), 'filas')
print('El test set tiene',len(X_test),'filas')
# Se importa un Modelo de K-Nearest Neighburs:
from sklearn.neighbors import KNeighborsClassifier
#Diccionario de Hiper-Parámetros a comparar con gridsearch
metric_knn = {
'n_neighbors' : [3, 6, 15,30],
'weights' : ['uniform', 'distance'],
'metric' : ['euclidean', 'minkowski'],
'algorithm' : ['auto','brute', 'kd_tree']
}
knn = KNeighborsClassifier()
knn_gridsearchcv = GridSearchCV(estimator = knn, param_grid = metric_knn, cv = 10)
start_time = time.time() #Tiempo de inicio
knn_grid_result = knn_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo que tomó ajustarse el modelo
print(" El modelo se ajustó en %s segundos" % (time.time() - start_time))
# Se presenta el mejor score del modelo y los parametros usados para obtener ese score
print("El mejor score tuvo un valor de: %f usando los parametros: \n %s"
% (knn_grid_result.best_score_, knn_grid_result.best_params_))
#Predicción del Modelo:
y_pred = knn_gridsearchcv.predict(X_test)
#Definición de DataFrame para usar en summary_metrics
df_knn = pd.DataFrame({
'y': y_test,
'yhat': y_pred
})
print("La matriz de Confusión asociada al modelo es: \n \n",confusion_matrix(y_test,y_pred))
metrics.summary_metrics(df_knn)
###Output
La matriz de Confusión asociada al modelo es:
[[56 0 0 0 0 0 0 0 0 0]
[ 0 55 0 0 0 0 0 0 0 0]
[ 0 0 62 0 0 0 0 0 0 0]
[ 0 0 0 54 0 0 0 1 0 1]
[ 0 0 0 0 47 0 0 0 0 0]
[ 0 0 0 0 0 49 0 0 0 1]
[ 0 0 0 0 0 0 53 0 0 0]
[ 0 0 0 0 0 0 0 52 0 0]
[ 0 0 0 1 0 0 0 0 53 0]
[ 0 0 0 2 1 0 0 0 1 51]]
###Markdown
The four metrics are similar and close to 1, even better than for the Logistic Regression model.

Decision Tree Classifier:
###Code
#Spliteo train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
print('El train set tiene',len(X_train), 'filas')
print('El test set tiene',len(X_test),'filas')
# Se importa un Modelo de Regresión de Arboles de Decisión
from sklearn.tree import DecisionTreeClassifier
#Diccionario de Hiper-Parámetros a comparar con gridsearch
param_DTR = {
'criterion' : ['gini', 'entropy'],
'splitter' : ['best', 'random'],
'max_features' : ['auto', 'sqrt', 'log2'],
'max_depth': [6,10,15,20,30]
}
DTC = DecisionTreeClassifier()
DTC_gridsearchcv = GridSearchCV(estimator = DTC, param_grid = param_DTR, cv = 10)
start_time = time.time() #Tiempo de inicio
DTC_grid_result = DTC_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo que tomó ajustarse el modelo
print(" El modelo se ajustó en %s segundos" % (time.time() - start_time))
# Se presenta el mejor score del modelo y los parametros usados para obtener ese score
print("El mejor score tuvo un valor de: %f usando los parametros: \n %s"
% (DTC_grid_result.best_score_, DTC_grid_result.best_params_))
#Predicción del Modelo:
y_pred = DTC_gridsearchcv.predict(X_test)
#Definición de DataFrame para usar en summary_metrics
df_DTC = pd.DataFrame({
'y': y_test,
'yhat': y_pred
})
print("La matriz de Confusión asociada al modelo es: \n \n",confusion_matrix(y_test,y_pred))
metrics.summary_metrics(df_DTC)
###Output
La matriz de Confusión asociada al modelo es:
[[54 0 0 1 1 0 0 0 0 0]
[ 0 44 0 0 2 1 2 1 2 3]
[ 1 3 50 1 0 0 0 1 5 1]
[ 0 3 2 33 1 7 0 0 3 7]
[ 1 2 0 1 42 0 0 0 0 1]
[ 0 0 0 4 0 38 2 0 0 6]
[ 1 1 1 0 1 5 41 0 2 1]
[ 0 0 2 1 2 0 0 42 3 2]
[ 0 11 1 3 2 2 1 0 33 1]
[ 2 2 1 1 2 4 0 2 5 36]]
###Markdown
The four metrics are similar to one another, but worse than those of the Logistic Regression and KNN models.

Which model is best based on its metrics?

Looking purely at the metrics, the best model is K-Nearest Neighbors, with the following scores:
###Code
metrics.summary_metrics(df_knn)
###Output
_____no_output_____
###Markdown
¿Cuál modelo demora menos tiempo en ajustarse? El modelo que se demoró menos en ajustarse fue Decission Tree Classifier con un tiempo de 2.804 segundos ¿Qué modelo escoges? Personalmente encuentro que el modelo de K-Nearest Neighbors es la elección correcta pues sus mpetricas fueron las mejores y su tiempo de ejecución fue razonable, así que elegiré este. Ejercicio 4__Comprensión del modelo:__ Tomando en cuenta el mejor modelo entontrado en el `Ejercicio 3`, debe comprender e interpretar minuciosamente los resultados y gráficos asocados al modelo en estudio, para ello debe resolver los siguientes puntos: * **Cross validation**: usando **cv** (con n_fold = 10), sacar una especie de "intervalo de confianza" sobre alguna de las métricas estudiadas en clases: * $\mu \pm \sigma$ = promedio $\pm$ desviación estandar * **Curva de Validación**: Replica el ejemplo del siguiente [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.htmlsphx-glr-auto-examples-model-selection-plot-validation-curve-py) pero con el modelo, parámetros y métrica adecuada. Saque conclusiones del gráfico. * **Curva AUC–ROC**: Replica el ejemplo del siguiente [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlsphx-glr-auto-examples-model-selection-plot-roc-py) pero con el modelo, parámetros y métrica adecuada. Saque conclusiones del gráfico.
###Code
#Cross Validation
from sklearn.model_selection import cross_val_score
precision = cross_val_score(estimator=knn_gridsearchcv,
X=X_train,
y=y_train,
cv=10)
precision = [round(x,2) for x in precision]
print('Precisiones: {} '.format(precision))
print('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision)))
#Curva de validación (copiado del link del enunciado)
from sklearn.model_selection import validation_curve
param_range = np.array([i for i in range(1,10)])
# Validation curve
# Se utilizan los mejores hiperparámetros encontrado en el ejercicio 3 menos n_neighbors
# pues este se varía en la curva de validación
train_scores, test_scores = validation_curve(
KNeighborsClassifier(algorithm = 'auto', metric = 'euclidean', weights = 'distance'), #
X_train,
y_train,
param_name="n_neighbors",
param_range=param_range,
scoring="accuracy",
n_jobs=1
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Curva de Validación para KNN")
plt.xlabel("n_neighbors")
plt.ylabel("Score")
plt.ylim(0.95, 1.05)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
The training-score line is constant and equal to 1 because KNN memorizes the entire training set and, with `weights='distance'`, each training point's nearest neighbour is itself at distance zero, so its own label dominates the vote. Predicting on the training set therefore always returns the correct label, regardless of the number of neighbours.
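A quick sanity check of this effect (a sketch reusing the `X_train`/`y_train` split from above):
###Code
# With weights='distance', each training point's nearest neighbour is itself
# (distance 0, hence an effectively infinite weight), so, barring duplicate
# samples with conflicting labels, the training accuracy is always 1.0.
knn_dist = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_train, y_train)
knn_unif = KNeighborsClassifier(n_neighbors=5, weights="uniform").fit(X_train, y_train)
print("train accuracy, weights='distance':", knn_dist.score(X_train, y_train))
print("train accuracy, weights='uniform' :", knn_unif.score(X_train, y_train))
###Output
_____no_output_____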
###Code
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc
from scipy import interp
from sklearn.metrics import roc_auc_score
from sklearn.multiclass import OneVsRestClassifier
from itertools import cycle
# Binarize the output
y = label_binarize(y, classes=digits["target"].unique())
n_classes = y.shape[1]
n_samples, n_features = X.shape
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3,
train_size = 0.7,
random_state=1998)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(KNeighborsClassifier(algorithm = 'auto', metric = 'euclidean', weights = 'distance'))
y_score = classifier.fit(X_train, y_train).predict(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.figure(figsize=(10,10))
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(12,12))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
The ROC curve is close to perfect for almost all classes, for the same reason discussed for the previous plot. That said, the curves that deviate slightly reflect the fact that, although the model scored very well on the metrics, they were not perfect.

Exercise 5

__Dimensionality reduction:__ Taking into account the best model found in `Exercise 3`, you must perform a dimensionality reduction of the dataset. To do so, tackle the problem using the two criteria seen in class:

* **Feature selection**
* **Feature extraction**

__Questions to answer:__

Once the dimensionality reduction has been carried out, produce some comparative statistics and plots between the original dataset and the new one (dataset size, model run time, etc.).

Feature selection:
###Code
#Notar que las columnas que se presentan tienen un solo valor constante igual a 0
print(digits["c00"].unique())
print(digits["c32"].unique())
print(digits["c39"].unique())
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
# Separamos las columnas objetivo
x_training = digits.drop(['c00','c32','c39','target'], axis=1) #Se dropean las columnas constantes mencionadas anteriormente
y_training = digits['target']
# Aplicando el algoritmo univariante de prueba F.
k = 20 # número de atributos a seleccionar
columnas = list(x_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(x_training, y_training)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
digits_atributos = digits[atributos + ["target"]]
print("Las columnas seleccionadas por la prueba F son:\n",atributos)
###Output
Las columnas seleccionadas por la prueba F son:
['c02', 'c10', 'c13', 'c20', 'c21', 'c26', 'c28', 'c30', 'c33', 'c34', 'c36', 'c38', 'c42', 'c43', 'c44', 'c46', 'c54', 'c58', 'c60', 'c61']
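###Markdown
To see how strongly each column discriminates between classes, the F-statistics of the fitted selector can be inspected directly (a sketch reusing `seleccionadas`, `columnas` and `k` from the cell above):
###Code
# Rank the columns by their ANOVA F-score (higher = more class-discriminative)
f_scores = (pd.DataFrame({"columna": columnas, "F": seleccionadas.scores_})
            .sort_values("F", ascending=False)
            .reset_index(drop=True))
f_scores.head(k)
###Output
_____no_output_____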
###Markdown
Comparisons (feature selection):
###Code
dfs_size = [digits.size,digits_atributos.size]
print("digits Original tenía", dfs_size[0], "elementos")
print("digits_atributos tiene", dfs_size[1], "elementos")
fig = plt.figure(figsize=(10,5))
plt.bar(x =["digits Original", "digits_atributos"], height = dfs_size, color = "blue" )
plt.title("Comparativa tamaño de los DataFrames")
plt.ylabel("Cantidad de Elementos")
plt.show()
#Se suma la memoria de cada columna para conocer el total
total2 = 0
memoria = digits_atributos.memory_usage() #Se determina la memoria usada en el DataFrame nuevo por columna
for i in range(0,len(memoria)):
total2 += memoria[i]
print("El DataFrame digits_atributos usa un total de:",total2, 'bytes')
print('En comparación el DataFrame original usaba un total de:', total, 'bytes')
lista = [1e5 * i for i in range(6)]
fig = plt.figure(figsize=(10,5))
plt.bar(x = ["digits Original", "digits_atributos"], height = [total,total2],color = "red")
plt.yticks(lista)
plt.title("Comparativa de memoria utilizada")
plt.ylabel("bytes")
plt.show()
X = digits.drop("target",axis = 1)
y = digits["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
start_time = time.time()
knn_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo en que se ejecutó el modelo con el dataset original
time_original = time.time() - start_time
print(" El modelo se ejecutó en %s segundos con el DataFrame Original" % (time_original))
#Spliteo train-test con el dataframe digits_pca
X = digits_atributos.drop("target",axis=1)
y = digits["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
start_time = time.time()
knn_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo en que se ejecutó el modelo con el dataframe digits_pca
time_atributos = time.time() - start_time
print(" El modelo se ejecutó en %s segundos con el DataFrame digits_atributos" % (time_atributos))
lista = [2 * i for i in range(9)]
fig = plt.figure(figsize=(10,5))
plt.bar(x = ["digits Original", "digits_atributos"], height = [time_original,time_atributos],color = "green")
plt.yticks(lista)
plt.title("Comparativa de tiempo de ejecución del modelo")
plt.ylabel("Segundos")
plt.show()
###Output
_____no_output_____
###Markdown
Feature extraction:
###Code
from sklearn.preprocessing import StandardScaler
#Se estandarizan los datos pues pca es suceptible a la distribucion de los datos
x = digits.drop("target",axis =1).values
y = digits["target"].values
x = StandardScaler().fit_transform(x)
# Se ajusta el modelo
from sklearn.decomposition import PCA
n_components = 20
pca = PCA(n_components=n_components)
principalComponents = pca.fit_transform(x)
# graficar varianza por componente
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = [f"PC{i}" for i in range(1,n_components+1)]
plt.figure(figsize=(17,6))
plt.bar(x= range(1,n_components+1), height=percent_variance, tick_label=columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# graficar varianza por la suma acumulada de los componente
percent_variance_cum = np.cumsum(percent_variance)
columns_sum =[f"PC1+...+PC{i+1}" for i in range(2,n_components)]
columns_sum = ["PC1", "PC1+PC2"] + columns_sum
plt.figure(figsize=(17,6))
plt.bar(x= range(1,n_components+1), height=percent_variance_cum, tick_label=columns_sum)
plt.ylabel('Percentate of Variance Explained')
plt.yticks([10*i for i in range(11)])
plt.xlabel('Principal Component Cumsum')
plt.xticks(rotation =45)
plt.title('PCA Scree Plot')
plt.show()
principalDataframe = pd.DataFrame(data = principalComponents, columns = columns)
targetDataframe = digits[['target']]
digits_pca = pd.concat([principalDataframe, targetDataframe],axis = 1)
digits_pca.head()
###Output
_____no_output_____
###Markdown
Comparisons (feature extraction):
###Code
dfs_pca_size = [digits.size,digits_pca.size]
print("digits Original tenía", dfs_pca_size[0], "elementos")
print("digits_atributos tiene", dfs_pca_size[1], "elementos")
fig = plt.figure(figsize=(10,5))
plt.bar(x =["digits Original", "digits_pca"], height = dfs_pca_size, color = "blue" )
plt.title("Comparativa tamaño de los DataFrames")
plt.ylabel("Cantidad de Elementos")
plt.show()
#Se suma la memoria de cada columna para conocer el total
total3 = 0
memoria = digits_pca.memory_usage() #Se determina la memoria usada en el DataFrame nuevo por columna
for i in range(0,len(memoria)):
total3 += memoria[i]
print("El DataFrame digits_pca usa un total de:",total2, 'bytes')
print('En comparación el DataFrame original usaba un total de:', total, 'bytes')
lista = [1e5 * i for i in range(6)]
fig = plt.figure(figsize=(10,5))
plt.bar(x = ["digits Original", "digits_pca"], height = [total,total3],color = "red")
plt.yticks(lista)
plt.title("Comparativa de memoria utilizada")
plt.ylabel("bytes")
plt.show()
X = digits.drop("target",axis = 1)
y = digits["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
start_time = time.time()
knn_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo en que se ejecutó el modelo con el dataset original
time_original = time.time() - start_time
print(" El modelo se ejecutó en %s segundos con el DataFrame Original" % (time_original))
#Spliteo train-test con el dataframe solo con atributos
X = digits_pca.drop("target",axis=1)
y = digits["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30,
train_size=0.70,
random_state=1998)
start_time = time.time()
knn_gridsearchcv.fit(X_train, y_train)
# Se presenta el tiempo en que se ejecutó el modelo con el dataset solo con atributos
time_pca = time.time() - start_time
print(" El modelo se ejecutó en %s segundos con el DataFrame digits_pca" % (time_pca))
lista = [2 * i for i in range(9)]
fig = plt.figure(figsize=(10,5))
plt.bar(x = ["digits Original", "digits_pca"], height = [time_original,time_pca],color = "green")
plt.yticks(lista)
plt.title("Comparativa de tiempo de ejecución del modelo")
plt.ylabel("Segundos")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 6

__Visualizing results:__ The code below is provided to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # fit the model
y_pred = model.predict(X_test)
# Show the correctly classified samples
if label=="correctos":
mask = (y_pred == Y_test)
color = "green"
# Show the misclassified samples
elif label=="incorrectos":
mask = (y_pred != Y_test)
color = "red"
else:
raise ValueError("Invalid value for 'label'")
X_aux = X_test[mask]
y_aux_true = Y_test[mask]
y_aux_pred = y_pred[mask]
# Plot the first nx*ny matching examples
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
if index < X_aux.shape[0]:
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question**

* Taking into account the best model found in `Exercise 3`, plot the results when:
    * the predicted and the true value are equal
    * the predicted and the true value differ
* When the predicted and the true value differ, why do these failures occur?

Predicted and true values are equal:
###Code
mostar_resultados(digits,model = KNeighborsClassifier() ,nx=3, ny=3,label = "correctos")
###Output
_____no_output_____
###Markdown
Predicted and true values differ:
###Code
mostar_resultados(digits,model = KNeighborsClassifier() ,nx=3, ny=3,label = "incorrectos")
###Output
_____no_output_____
###Markdown
Assignment N°02

Instructions

1.- Fill in your personal details (name and USM ID) in the following cell.

**Name**: Alan Grez Jimenez
**Rol (student ID)**: 201710519-4

2.- You must push this file with your changes to your personal course repository, including data, images, scripts, etc.

3.- The following will be evaluated:
- Solutions
- Code
- That Binder is correctly configured.
- When pressing `Kernel -> Restart Kernel and Run All Cells`, every cell must run without errors.

I.- Digit classification

In this lab we will tackle the task of recognizing a digit from an image. The goal is to make, from the data, the best possible prediction for each image. This requires the classic steps of a _Machine Learning_ project, such as descriptive statistics, visualization and preprocessing.

* You are asked to fit at least three classification models:
    * Logistic regression
    * K-Nearest Neighbours
    * One or more algorithms of your choice [link](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning) (you must pick an _estimator_ that has at least one hyperparameter).
* For the models that have hyperparameters, it is mandatory to search for the best one(s) with some technique available in `scikit-learn` ([see more](https://scikit-learn.org/stable/modules/grid_search.html#tuning-the-hyper-parameters-of-an-estimator)).
* For each model, perform _Cross Validation_ with 10 _folds_ on the training data in order to determine a confidence interval for the model's _score_.
* Make a prediction with each of the three models on the _test_ data and obtain its _score_.
* Analyze their error metrics (**accuracy**, **precision**, **recall**, **f-score**).

Data exploration

The dataset to be used is loaded below, through the `datasets` sub-module of `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
import missingno as msno
import time
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe named `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 correspond to the grayscale representation of the image (integer intensities from 0, white, to 16, black) and the last one, named _target_, corresponds to the digit.
###Code
digits_dict = datasets.load_digits()
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Now let's check whether there are missing values:
###Code
msno.matrix(digits.dropna())
###Output
_____no_output_____
###Markdown
There are no missing values. Now, let's look at a visualization by target.
###Code
import seaborn as sns
sns.countplot(x='target',data=digits)
###Output
_____no_output_____
###Markdown
Exercise 2

**Visualization:** To visualize the data we will use matplotlib's `imshow` method. The array needs to be reshaped from dimensions (1,64) to (8,8) so that the image is square and the digit can be made out. We will also overlay the label corresponding to the digit using the `text` method, which lets us compare the generated image with the label associated with the values. We will do this for the first 25 records of the file.
###Code
digits_dict['images'][0]
###Output
_____no_output_____
###Markdown
Visualize images of the digits using the `images` key of `digits_dict`. Hint: use `plt.subplots` and the `imshow` method. You can build a grid of several images at once!
###Code
# Hagamos un pequeño trabajo de índices y nombres.
#indx
nx, ny = 5, 5
indx = [ ]
for i in range(nx):
for j in range(ny):
indx.append( (i,j) )
#name
name = [ ]
for k in range(nx*ny):
if k < 10:
name.append( "c0"+str(k) )
else:
name.append( "c"+str(k) )
fig, axs = plt.subplots(nx, ny, figsize=(20, 20))
for k in range(nx*ny):
i,j = indx[k]
axs[i,j].imshow(digits_dict["images"][k],cmap='Greys')
axs[i][j].set_title(name[k])
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 3

**Machine Learning**: In this part you must train the different models chosen from the `sklearn` library. For each model, carry out the following steps:

* **train-test**
    * Create training and test sets (you decide the appropriate proportions).
    * Print the size of the training set and of the test set.
* **model**:
    * Instantiate the target model from the sklearn library.
    * *Hyper-parameters*: use `sklearn.model_selection.GridSearchCV` to obtain the best estimate of the target model's parameters.
* **Metrics**:
    * Plot the confusion matrix.
    * Analyze the error metrics.

__Questions to answer:__

* Which model is best based on its metrics?
* Which model takes the least time to fit?
* Which model do you choose?

Train test
###Code
# datos
X = digits.drop(columns="target").values
y = digits["target"].values
from sklearn.model_selection import train_test_split
# split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 2)
# print rows train and test sets
print('Separando informacion:\n')
print('numero de filas data original : ',len(X))
print('numero de filas train set : ',len(X_train))
print('numero de filas test set : ',len(X_test))
###Output
Separando informacion:
numero de filas data original : 1797
numero de filas train set : 1437
numero de filas test set : 360
###Markdown
Model
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
# Logistic Regression
LogReg_params = {'tol':[1e-4,1e-5], 'C': [1.0,5.0,10.0],'solver': ['newton-cg'], 'max_iter':range(1500,2500,100)}
rlog = GridSearchCV(LogisticRegression(max_iter=3000),LogReg_params)
rlog.fit(X_train, y_train)
# K-Nearest Neighbours
Neigh_params = {'algorithm': ('auto', 'ball_tree', 'kd_tree', 'brute'), 'leaf_size': (20,30,40),
'n_neighbors':(4,10,15,20), 'weights': ('uniform', 'distance')}
neigh = GridSearchCV(KNeighborsClassifier(),Neigh_params)
neigh.fit(X_train, y_train)
# A elección: svm.SVC
SVC_params = {'C':range(1,10),'degree':range(1,5),'kernel':('poly', 'rbf', 'sigmoid')}
svm_svc = GridSearchCV(svm.SVC(),SVC_params)
svm_svc.fit(X_train, y_train)
# A elección: svm.SVC
Rand_Forest_params = {'n_estimators':range(1,100,10),'criterion':['gini','entropy'],'min_samples_split':range(2,5),
'min_samples_leaf':range(1,5)}
randForest = GridSearchCV(RandomForestClassifier(),Rand_Forest_params)
randForest.fit(X_train, y_train)
# Hyper-parameters
# NOTE: cv_results_['params'][0] is just the first combination in each grid,
# not necessarily the winning one; the tuned values live in best_params_ (see the sketch below).
print("Hiper-Parámetros: ")
print( "Logistic Regression: "+ str(rlog.cv_results_['params'][0]))
print("KNeighborsClassifier: " + str(neigh.cv_results_['params'][0]))
print( "svm.SVC: "+ str(svm_svc.cv_results_['params'][0]))
print( "RandomForestClassifier: "+ str(randForest.cv_results_['params'][0]))
###Output
Hiper-Parámetros:
Logistic Regression: {'C': 1.0, 'max_iter': 1500, 'solver': 'newton-cg', 'tol': 0.0001}
KNeighborsClassifier: {'algorithm': 'auto', 'leaf_size': 20, 'n_neighbors': 4, 'weights': 'uniform'}
svm.SVC: {'C': 1, 'degree': 1, 'kernel': 'poly'}
RandomForestClassifier: {'criterion': 'gini', 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 1}
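###Markdown
Note that `cv_results_['params'][0]` printed above is only the first combination in each grid, not necessarily the winning one; GridSearchCV stores the tuned configuration in `best_params_` and its cross-validated score in `best_score_`. A short sketch that prints them for each search fitted above:
###Code
# The attributes that hold the tuned configuration are best_params_ / best_score_
for name, gs in [("LogisticRegression", rlog), ("KNeighborsClassifier", neigh),
                 ("svm.SVC", svm_svc), ("RandomForestClassifier", randForest)]:
    print(f"{name}: best score = {gs.best_score_:.4f}, best params = {gs.best_params_}")
###Output
_____no_output_____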
###Markdown
Consequently, the best models are:
###Code
train_times = {}
# Regresión Logística.
rlog = LogisticRegression(C= 1.0, solver ='newton-cg', tol = 0.0001,max_iter=1500)
t1=time.time()
rlog.fit(X_train, y_train) # ajustando el modelo
train_times["LogisticRegression"]= time.time() - t1
# K-Nearest Neighbours
neigh = KNeighborsClassifier(algorithm = 'auto', leaf_size = 20, n_neighbors = 4, weights = 'uniform')
t1=time.time()
neigh.fit(X_train, y_train)
train_times["KNeighborsClassifier"] = time.time() - t1
# A elección: svm.SVC
svm_svc = svm.SVC( degree= 1,kernel = 'poly')
t1=time.time()
svm_svc.fit(X_train, y_train)
train_times["svm.SVC"]= time.time() - t1
# A elección: RandomForestClassifier
randForest = RandomForestClassifier(criterion= 'gini', min_samples_leaf= 1, min_samples_split= 2, n_estimators= 1)
t1=time.time()
randForest.fit(X_train, y_train)
train_times["RandomForestClassifier"]= time.time() - t1
print('Los tiempos de entrenamiento son:')
for model,tiempo in train_times.items():
print("El tiempo de entrenamiento del modelo: {m} es {t}".format(m = model,t = tiempo))
###Output
Los tiempos de entrenamiento son:
El tiempo de entrenamiento del modelo: LogisticRegression es 9.809859991073608
El tiempo de entrenamiento del modelo: KNeighborsClassifier es 0.02494215965270996
El tiempo de entrenamiento del modelo: svm.SVC es 0.0557858943939209
El tiempo de entrenamiento del modelo: RandomForestClassifier es 0.005047321319580078
###Markdown
Metrics
###Code
from sklearn.metrics import confusion_matrix,accuracy_score,recall_score,precision_score,f1_score
for model in [rlog, neigh, svm_svc,randForest]:
print('##################### {m} #####################'.format(m = str(model)))
y_true = list(y_test)
y_pred = list(model.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetricas:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred,average='weighted'))
print('precision: ',precision_score(y_true,y_pred,average='weighted'))
print('f-score: ',f1_score(y_true,y_pred,average='weighted'))
print("")
###Output
##################### LogisticRegression(max_iter=1500, solver='newton-cg') #####################
Matriz de confusion:
[[31 0 0 0 1 0 0 0 0 0]
[ 0 41 0 1 0 0 0 0 1 1]
[ 0 0 31 0 0 0 0 0 0 0]
[ 0 0 0 33 0 0 0 2 1 0]
[ 0 0 0 0 31 0 0 0 3 1]
[ 0 1 0 0 1 40 0 0 0 1]
[ 0 1 0 0 0 0 33 0 1 0]
[ 0 0 0 0 0 0 0 39 0 1]
[ 0 0 0 0 1 0 0 0 34 1]
[ 0 0 0 0 0 1 0 0 1 26]]
Metricas:
accuracy: 0.9416666666666667
recall: 0.9416666666666667
precision: 0.944844330702475
f-score: 0.9424086721254218
##################### KNeighborsClassifier(leaf_size=20, n_neighbors=4) #####################
Matriz de confusion:
[[32 0 0 0 0 0 0 0 0 0]
[ 0 44 0 0 0 0 0 0 0 0]
[ 0 0 31 0 0 0 0 0 0 0]
[ 0 0 0 35 0 0 0 1 0 0]
[ 0 0 0 0 33 0 0 1 1 0]
[ 0 0 0 0 0 43 0 0 0 0]
[ 0 0 0 0 0 0 35 0 0 0]
[ 0 0 0 0 0 0 0 40 0 0]
[ 0 1 0 0 0 0 0 0 35 0]
[ 0 0 0 0 0 1 0 1 0 26]]
Metricas:
accuracy: 0.9833333333333333
recall: 0.9833333333333333
precision: 0.9840395883903637
f-score: 0.9833113636560299
##################### SVC(degree=1, kernel='poly') #####################
Matriz de confusion:
[[31 0 0 0 1 0 0 0 0 0]
[ 0 44 0 0 0 0 0 0 0 0]
[ 0 0 31 0 0 0 0 0 0 0]
[ 0 0 0 34 0 0 0 0 2 0]
[ 0 0 0 0 31 0 0 0 3 1]
[ 0 0 0 0 0 43 0 0 0 0]
[ 0 1 0 0 0 0 34 0 0 0]
[ 0 0 0 0 0 0 0 39 1 0]
[ 0 1 0 0 0 0 0 0 34 1]
[ 0 0 0 0 0 1 0 0 2 25]]
Metricas:
accuracy: 0.9611111111111111
recall: 0.9611111111111111
precision: 0.9641242135090263
f-score: 0.9616808512835242
##################### RandomForestClassifier(n_estimators=1) #####################
Matriz de confusion:
[[27 0 1 1 0 0 0 0 1 2]
[ 0 35 4 0 0 2 1 0 1 1]
[ 0 0 22 4 0 0 0 0 4 1]
[ 0 1 2 27 0 3 0 0 2 1]
[ 0 2 0 0 25 4 0 1 1 2]
[ 0 0 0 1 0 35 0 2 3 2]
[ 0 0 2 0 0 0 31 0 2 0]
[ 0 0 0 0 1 3 0 32 3 1]
[ 0 2 0 0 0 2 5 2 22 3]
[ 0 3 1 2 1 2 0 0 1 18]]
Metricas:
accuracy: 0.7611111111111111
recall: 0.7611111111111111
precision: 0.7744224356217351
f-score: 0.764669683315553
###Markdown
After fitting and comparing four models, we can state that the most accurate one is *KNeighborsClassifier(leaf_size=20, n_neighbors=4)*, and it is also the second fastest to train after *RandomForestClassifier*; for this reason we will keep working with *KNeighborsClassifier*.

Exercise 4

__Understanding the model:__ Taking into account the best model found in `Exercise 3`, you must thoroughly understand and interpret the results and plots associated with the model under study. To do so, address the following points:

* **Cross validation**: using **cv** (with n_fold = 10), derive a rough "confidence interval" for one of the metrics studied in class:
    * $\mu \pm \sigma$ = mean $\pm$ standard deviation
* **Validation curve**: replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.html#sphx-glr-auto-examples-model-selection-plot-validation-curve-py), but with the appropriate model, parameters and metric. Draw conclusions from the plot.
* **AUC-ROC curve**: replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#sphx-glr-auto-examples-model-selection-plot-roc-py), but with the appropriate model, parameters and metric. Draw conclusions from the plot.

**Cross Validation:**
###Code
from sklearn.model_selection import cross_val_score
scores = cross_val_score(neigh, X, y, cv=10)
print("Precisión: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
###Output
Precisión: 0.97 (+/- 0.03)
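###Markdown
The band printed above is the mean ± 2 standard deviations; the $\mu \pm \sigma$ interval asked for in the statement can be built explicitly (a sketch reusing `scores`):
###Code
# Explicit mu +/- sigma interval over the 10 folds
mu, sigma = scores.mean(), scores.std()
print(f"accuracy interval (mu - sigma, mu + sigma): ({mu - sigma:.3f}, {mu + sigma:.3f})")
###Output
_____no_output_____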
###Markdown
**Validation curve**
###Code
from sklearn.model_selection import validation_curve
# Create range of values for parameter
param_range = np.arange(1, 250, 2)
# Calculate accuracy on training and test set using range of parameter values
train_scores, test_scores = validation_curve(neigh, X, y, param_name="n_neighbors", param_range=param_range,
cv=10, scoring="accuracy", n_jobs=-1)
# Calculate mean and standard deviation for training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Calculate mean and standard deviation for test set scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Plot mean accuracy scores for training and test sets
plt.plot(param_range, train_mean, label="Training score", color="black")
plt.plot(param_range, test_mean, label="Cross-validation score", color="dimgrey")
# Plot accurancy bands for training and test sets
plt.fill_between(param_range, train_mean - train_std, train_mean + train_std, color="gray")
plt.fill_between(param_range, test_mean - test_std, test_mean + test_std, color="gainsboro")
# Create plot
plt.title("Validation Curve With KNeighborsClassifier")
plt.xlabel("Number Of Neighbors")
plt.ylabel("Accuracy Score")
plt.tight_layout()
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
**AUC-ROC curve**
###Code
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
y_test=label_binarize(y_test, classes=[i for i in range(10)])
# K-Nearest Neighbours
neigh = KNeighborsClassifier(algorithm = 'auto', leaf_size = 20, n_neighbors = 4, weights = 'uniform')
y_score = neigh.fit(X_train, y_train).predict_proba(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.figure(figsize=(10,10))
lw = 2
for i in range(10):
plt.plot(fpr[i], tpr[i],
lw=lw, label='ROC curve del dijito {0} (area ={1:f})'
''.format(i,roc_auc[i]) )
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([-0.01, 1.0])
plt.ylim([0.0, 1.005])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 5

__Dimensionality reduction:__ Taking into account the best model found in `Exercise 3`, you must perform a dimensionality reduction of the dataset. To do so, tackle the problem using the two criteria seen in class:

* **Feature selection**
* **Feature extraction**

__Questions to answer:__

Once the dimensionality reduction has been carried out, produce some comparative statistics and plots between the original dataset and the new one (dataset size, model run time, etc.).

Let us compare **feature selection** via *SelectKBest* against **feature extraction** via *principal component analysis (PCA)*.

Feature selection: SelectKBest
###Code
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
# Apply the univariate F-test selection.
k = 20 # number of attributes to select
# We keep the k attributes with the highest F-scores.
# Separate the feature columns from the target
columnas = list(digits.drop(['target'], axis= 1).columns.values)
seleccionadas = SelectKBest(f_classif, k=k).fit(X_train, y_train)
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
###Output
_____no_output_____
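###Markdown
Since `k = 20` in the cell above while the text below lists 25 columns, it is worth checking what was actually kept (a sketch reusing `atributos`):
###Code
# Verify the attributes actually returned by SelectKBest
print(len(atributos), "attributes selected:")
print(atributos)
###Output
_____no_output_____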
###Markdown
For this, we will work with the selection returned by *SelectKBest*: ['c02', 'c10', 'c13', 'c18', 'c19', 'c20', 'c21', 'c26', 'c27', 'c28', 'c30', 'c33', 'c34', 'c35', 'c36', 'c38', 'c42', 'c43', 'c44', 'c46', 'c53', 'c54', 'c58', 'c60', 'c61'].

Feature extraction: PCA
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# Reescalamiento de los datos.
X_new = StandardScaler().fit_transform(X_train)
# Ajuste modelo
pca = PCA(n_components = 64)
principalComponents = pca.fit_transform(X_new)
# Graficar varianza por componente
percent_variance = pca.explained_variance_ratio_* 100
columns = [ "PC{j}".format(j = i) for i in range(64)]
plt.figure(figsize=(15,4))
plt.bar(x= range(1,65), height=percent_variance, tick_label=columns)
plt.xticks(rotation=75)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# graficar varianza por la suma acumulada de los componente
percent_variance_cum = np.cumsum(percent_variance)
columns = [ "S{j}".format(j = i) for i in range(64)]
plt.figure(figsize=(15,4))
plt.bar(x= range(1,65), height=percent_variance_cum , tick_label=columns)
plt.xticks(rotation=75)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
percent_variance_cum[39]
#percent_variance_cum
###Output
_____no_output_____
###Markdown
In other words, 95.08% of the variance in the variables can be explained using only the first 39 principal components. Let's fit PCA with 39 components and build the new dataframe projected onto them.
###Code
pca = PCA(n_components=39)
principalComponents = pca.fit_transform(X_train)
principalDataframe = pd.DataFrame(data = principalComponents, columns = [ "PC{j}".format(j = i) for i in range(39)])
targetDataframe = digits[['target']]
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head()
###Output
_____no_output_____
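###Markdown
Instead of reading the number of components off the cumulative-variance bar chart, scikit-learn can pick it automatically for a target explained-variance ratio. A sketch, reusing the standardized matrix `X_new` from above:
###Code
# A float n_components keeps just enough components to reach that fraction of explained variance
pca_95 = PCA(n_components=0.95).fit(X_new)
print("components kept:", pca_95.n_components_)
print("explained variance ratio:", pca_95.explained_variance_ratio_.sum())
###Output
_____no_output_____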
###Markdown
Comparison

Now, let's compare the models.
###Code
# datos
X = digits.drop(columns="target").values
y = digits["target"].values
train_times = {}
cantidad_atributos = {}
# split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 2)
cantidad_atributos["KNC_original"] = [X_train.shape[1]]
# print rows train and test sets
print('Separando informacion:\n')
print('numero de filas data original : ',len(X))
print('numero de filas train set : ',len(X_train))
print('numero de filas test set : ',len(X_test))
# K-Nearest Neighbours
original = KNeighborsClassifier(algorithm = 'auto', leaf_size = 20, n_neighbors = 4, weights = 'uniform')
t1 = time.time()
original.fit(X_train, y_train)
train_times["KNC_original"] = [time.time() - t1]
model = original
print('##################### {m} #####################'.format(m = str(model)))
y_true = list(y_test)
y_pred = list(model.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetricas:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred,average='weighted'))
print('precision: ',precision_score(y_true,y_pred,average='weighted'))
print('f-score: ',f1_score(y_true,y_pred,average='weighted'))
print("")
# PCA
pca = PCA(n_components=39)
X_new = pca.fit_transform(X)
X_train, X_test, Y_train, Y_test = train_test_split(X_new, y, test_size=0.2, random_state = 2)
cantidad_atributos["PCA_39"] = [X_train.shape[1]]
kn_pca = KNeighborsClassifier(algorithm = 'auto', leaf_size = 20, n_neighbors = 4, weights = 'uniform')
t1 = time.time()
kn_pca.fit(X_train, y_train)
train_times["PCA_39"] = [time.time() - t1]
model = kn_pca
print('##################### {m} #####################'.format(m = str(model)))
y_true = list(y_test)
y_pred = list(model.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetricas:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred,average='weighted'))
print('precision: ',precision_score(y_true,y_pred,average='weighted'))
print('f-score: ',f1_score(y_true,y_pred,average='weighted'))
print("")
# SelectKBest.
# atributos: lista de los atributos seleccionados por SelectKBest
X_train, X_test, y_train, y_test = train_test_split(digits[atributos], y, test_size=0.2, random_state = 2)
cantidad_atributos["SKB_KNC"] = [X_train.shape[1]]
# train new model
neigh_skb = KNeighborsClassifier(algorithm = 'auto', leaf_size = 20, n_neighbors = 4, weights = 'uniform')
t1=time.time()
neigh_skb.fit(X_train, y_train)
train_times["SKB_KNC"] = [time.time() - t1]
model = neigh_skb
print('##################### {m} #####################'.format(m = str(model)))
y_true = list(y_test)
y_pred = list(model.predict(X_test))
print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))
print('\nMetricas:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred,average='weighted'))
print('precision: ',precision_score(y_true,y_pred,average='weighted'))
print('f-score: ',f1_score(y_true,y_pred,average='weighted'))
print("")
print("Los tiempos de entrenamiento son: ")
print("KNC_original: "+str(train_times["KNC_original"][0]))
print("PCA_39: "+str(train_times["PCA_39"][0]))
print("SKB_KNC: "+str(train_times["SKB_KNC"][0]))
print("La cantidad de atributos considerados del dataset son: ")
print("KNC_original: "+str(cantidad_atributos["KNC_original"][0]))
print("PCA_39: "+str(cantidad_atributos["PCA_39"][0]))
print("SKB_KNC: "+str(cantidad_atributos["SKB_KNC"][0]))
# Figure
fig, axs = plt.subplots(1, 2 , figsize=(15, 5) )
nombres = list(cantidad_atributos.keys())
datos = np.array(list(cantidad_atributos.values())).T[0]
xx = range(len(datos))
axs[0].set_title('Número de atributos')
axs[0].bar(xx, datos, width=0.8, align='center')
axs[0].set_xticks(xx)
axs[0].set_xticklabels(nombres)
datos = np.array(list(train_times.values())).T[0]
axs[1].set_title('Tiempo de entrenamiento')
axs[1].bar(xx, datos, width=0.8, align='center')
axs[1].set_xticks(xx)
axs[1].set_xticklabels(nombres)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 6

__Visualizing results:__ The code below is provided to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=3, ny=3,label = "correctos"):
"""
Muestra los resultados de las prediciones de un modelo
de clasificacion en particular. Se toman aleatoriamente los valores
de los resultados.
- label == 'correcto': retorna los valores en que el modelo acierta.
- label == 'incorrecto': retorna los valores en que el modelo no acierta.
Observacion: El modelo que recibe como argumento debe NO encontrarse
'entrenado'.
:param digits: dataset 'digits'
:param model: modelo de sklearn
:param nx: numero de filas (subplots)
:param ny: numero de columnas (subplots)
:param label: datos correctos o incorrectos
:return: graficos matplotlib
"""
digits_dict = datasets.load_digits()
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state = 42)
model.fit(X_train, Y_train) # fit the model
y_pred = list(model.predict(X_test))
# Show the correctly classified samples
if label=="correctos":
mask = (y_pred == Y_test)
color = "green"
# Show the misclassified samples
elif label=="incorrectos":
mask = (y_pred != Y_test)
color = "red"
else:
raise ValueError("Invalid value for 'label'")
X_aux = X_test[mask]
y_aux_true = np.array(Y_test)[mask]
y_aux_pred = np.array(y_pred)[mask]
# Plot the first nx*ny matching examples
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question**

* Taking into account the best model found in `Exercise 3`, plot the results when:
    * the predicted and the true value are equal
    * the predicted and the true value differ
* When the predicted and the true value differ, why do these failures occur?
###Code
model = KNeighborsClassifier(leaf_size = 20, n_neighbors = 4, weights = 'uniform')
mostar_resultados(digits, model, nx=5, ny=5, label = "correctos")
mostar_resultados(digits, model, nx=5, ny=5, label = "incorrectos")
###Output
_____no_output_____
###Markdown
Assignment N°02

Instructions

1.- Fill in your personal details (name and USM ID) in the following cell.

**Name**: Maximiliano Ramírez Núñez
**Rol (student ID)**: 201710507-0

2.- You must push this file with your changes to your personal course repository, including data, images, scripts, etc.

3.- The following will be evaluated:
- Solutions
- Code
- That Binder is correctly configured.
- When pressing `Kernel -> Restart Kernel and Run All Cells`, every cell must run without errors.

I.- Digit classification

In this lab we will tackle the task of recognizing a digit from an image. The goal is to make, from the data, the best possible prediction for each image. This requires the classic steps of a _Machine Learning_ project, such as descriptive statistics, visualization and preprocessing.

* You are asked to fit at least three classification models:
    * Logistic regression
    * K-Nearest Neighbours
    * One or more algorithms of your choice [link](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning) (you must pick an _estimator_ that has at least one hyperparameter).
* For the models that have hyperparameters, it is mandatory to search for the best one(s) with some technique available in `scikit-learn` ([see more](https://scikit-learn.org/stable/modules/grid_search.html#tuning-the-hyper-parameters-of-an-estimator)).
* For each model, perform _Cross Validation_ with 10 _folds_ on the training data in order to determine a confidence interval for the model's _score_.
* Make a prediction with each of the three models on the _test_ data and obtain its _score_.
* Analyze their error metrics (**accuracy**, **precision**, **recall**, **f-score**).

Data exploration

The dataset to be used is loaded below, through the `datasets` sub-module of `sklearn`.
###Code
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import tree
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn import svm
import warnings
import timeit
warnings.filterwarnings("ignore")
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(20,30)})
%matplotlib inline
digits_dict = datasets.load_digits()
print(digits_dict["DESCR"])
digits_dict.keys()
digits_dict["target"]
###Output
_____no_output_____
###Markdown
Next, a dataframe named `digits` is created from the data in `digits_dict`, with 65 columns: the first 64 correspond to the grayscale representation of the image (integer intensities from 0, white, to 16, black) and the last one, named _target_, corresponds to the digit.
###Code
digits = (
pd.DataFrame(
digits_dict["data"],
)
.rename(columns=lambda x: f"c{x:02d}")
.assign(target=digits_dict["target"])
.astype(int)
)
digits.head()
###Output
_____no_output_____
###Markdown
Exercise 1

**Exploratory analysis:** Carry out your exploratory analysis; don't leave anything out! Remember, every analysis should answer a question. Some suggestions:

* How are the data distributed?
* How much memory am I using?
* What data types are they?
* How many records are there per class?
* Are there records that do not match your prior knowledge of the data?
###Code
digits.describe()
print(len(digits.columns))
gr = digits.groupby(['target']).size().reset_index(name='counts')
fig, ax = plt.subplots(figsize=(8,4),nrows=1)
sns.barplot(data=gr, x='target', y='counts', palette="Blues_d",ax=ax)
ax.set_title('Distribución de clases')
plt.show()
print("")
###Output
65
###Markdown
Note that all classes are roughly uniformly distributed.
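A quick way to back this up numerically is a chi-square goodness-of-fit test of the class counts against a uniform distribution (a sketch reusing `digits`):
###Code
# Chi-square test of the class counts against a uniform distribution;
# a large p-value is consistent with the classes being (roughly) balanced
from scipy.stats import chisquare

counts = digits["target"].value_counts().sort_index()
stat, pvalue = chisquare(counts)
print(f"chi2 = {stat:.2f}, p-value = {pvalue:.3f}")
###Output
_____no_output_____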
###Code
df=digits.drop(['target'],axis=1) # DataFrame without the target column
figure(num=None, figsize=(30, 30)) # set the plotting window size
k=1 # subplot position counter
for i in df.columns: # iterate over the columns to generate the histograms
plt.subplot(8,8,k)
plt.hist(df[i], bins = 60)
plt.title('Histograma para la celda '+i)
k+=1
plt.show()
# Memory used: 456.4 KB
digits.info()
# Data types
digits.dtypes.unique()
# All columns have the same number of elements
digits.describe().T['count'].unique()
###Output
_____no_output_____
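###Markdown
`digits.info()` above reports the approximate memory footprint; it can also be obtained as a single number (a small sketch):
###Code
# Total memory used by the DataFrame, in kilobytes
print(round(digits.memory_usage(deep=True).sum() / 1024, 1), "KB")
###Output
_____no_output_____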
###Markdown
Exercise 2

**Visualization:** To visualize the data we will use matplotlib's `imshow` method. The array needs to be reshaped from dimensions (1,64) to (8,8) so that the image is square and the digit can be made out. We will also overlay the label corresponding to the digit using the `text` method, which lets us compare the generated image with the label associated with the values. We will do this for the first 25 records of the file.
###Code
digits_dict["images"][0]
###Output
_____no_output_____
###Markdown
Visualize images of the digits using the `images` key of `digits_dict`. Hint: use `plt.subplots` and the `imshow` method. You can build a grid of several images at once!
###Code
nx, ny = 5, 5
fig, axs = plt.subplots(nx, ny, figsize=(12, 12))
for i in range(1,26):
plt.subplot(5,5,i)
plt.imshow(digits_dict["images"][i])
###Output
_____no_output_____
###Markdown
Exercise 3

**Machine Learning**: In this part you must train the different models chosen from the `sklearn` library. For each model, carry out the following steps:

* **train-test**
    * Create training and test sets (you decide the appropriate proportions).
    * Print the size of the training set and of the test set.
* **model**:
    * Instantiate the target model from the sklearn library.
    * *Hyper-parameters*: use `sklearn.model_selection.GridSearchCV` to obtain the best estimate of the target model's parameters.
* **Metrics**:
    * Plot the confusion matrix.
    * Analyze the error metrics.

__Questions to answer:__

* Which model is best based on its metrics?
* Which model takes the least time to fit?
* Which model do you choose?
###Code
X = digits.drop(columns="target").values
y = digits["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print("Largo Train: ", X_train.shape)
print("Largo Test: ", X_test.shape)
###Output
Largo Train: (1437, 64)
Largo Test: (360, 64)
###Markdown
Logistic Regression
###Code
parameters = {'penalty': ['l1', 'l2', 'elasticnet'], 'C':[1, 10]}
reg = LogisticRegression()
gs = GridSearchCV(reg, parameters)
gs.fit(X_train, y_train)
print("Best: %f con %s" % (gs.best_score_, gs.best_params_))
#Entrenar modelo
clf = LogisticRegression(penalty='l2', C=1)
clf.fit(X_train, y_train)
#Predicción
y_pred= clf.predict(X_test)
#Evaluar
confusion_matrix(y_test, y_pred)
#Métricas
target_names = ['numero '+ str(i) for i in range(0,10)]
print(classification_report(y_test, y_pred, target_names=target_names, digits=5))
###Output
precision recall f1-score support
numero 0 1.00000 1.00000 1.00000 33
numero 1 0.96552 1.00000 0.98246 28
numero 2 0.97059 1.00000 0.98507 33
numero 3 0.97059 0.97059 0.97059 34
numero 4 1.00000 0.95652 0.97778 46
numero 5 0.91667 0.93617 0.92632 47
numero 6 0.94444 0.97143 0.95775 35
numero 7 1.00000 0.97059 0.98507 34
numero 8 0.96667 0.96667 0.96667 30
numero 9 0.97436 0.95000 0.96203 40
accuracy 0.96944 360
macro avg 0.97088 0.97220 0.97137 360
weighted avg 0.96994 0.96944 0.96952 360
###Markdown
KNN
###Code
parameters = {'n_neighbors':[1, 10]}
knn = KNeighborsClassifier()
gs = GridSearchCV(knn, parameters)
gs.fit(X_train, y_train)
print("Best: %f con %s" % (gs.best_score_, gs.best_params_))
#Entrenar modelo
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)
#Predicción
y_pred= clf.predict(X_test)
#Evaluar
confusion_matrix(y_test, y_pred)
#Métricas
target_names = ['numero '+ str(i) for i in range(0,10)]
print(classification_report(y_test, y_pred, target_names=target_names, digits=5))
###Output
precision recall f1-score support
numero 0 1.00000 1.00000 1.00000 33
numero 1 0.93333 1.00000 0.96552 28
numero 2 1.00000 1.00000 1.00000 33
numero 3 0.97143 1.00000 0.98551 34
numero 4 0.97826 0.97826 0.97826 46
numero 5 0.97872 0.97872 0.97872 47
numero 6 0.97222 1.00000 0.98592 35
numero 7 1.00000 0.97059 0.98507 34
numero 8 1.00000 0.93333 0.96552 30
numero 9 0.94872 0.92500 0.93671 40
accuracy 0.97778 360
macro avg 0.97827 0.97859 0.97812 360
weighted avg 0.97816 0.97778 0.97771 360
###Markdown
SVM
###Code
from sklearn.svm import SVC
parameters = {'kernel': ('linear', 'rbf'), 'C': range(1, 10)}  # C must be strictly positive
sv = SVC()
gs = GridSearchCV(sv, parameters)
gs.fit(X_train, y_train)
print("Best: %f with %s" % (gs.best_score_, gs.best_params_))
from sklearn.svm import SVC
# Train the model
clf = SVC(kernel= 'rbf', C=7)
%timeit clf.fit(X_train, y_train)
# Prediction
y_pred = clf.predict(X_test)
# Evaluate
confusion_matrix(y_test, y_pred)
# Metrics
target_names = ['numero '+ str(i) for i in range(0,10)]
print(classification_report(y_test, y_pred, target_names=target_names, digits=5))
###Output
precision recall f1-score support
numero 0 1.00000 1.00000 1.00000 33
numero 1 1.00000 1.00000 1.00000 28
numero 2 1.00000 1.00000 1.00000 33
numero 3 1.00000 0.97059 0.98507 34
numero 4 1.00000 1.00000 1.00000 46
numero 5 0.95833 0.97872 0.96842 47
numero 6 0.97222 1.00000 0.98592 35
numero 7 0.97059 0.97059 0.97059 34
numero 8 1.00000 0.96667 0.98305 30
numero 9 0.97500 0.97500 0.97500 40
accuracy 0.98611 360
macro avg 0.98761 0.98616 0.98681 360
weighted avg 0.98630 0.98611 0.98613 360
###Markdown
Exercise 4__Understanding the model:__ Taking into account the best model found in `Exercise 3`, you must carefully understand and interpret the results and plots associated with the model under study; to do so, address the following points: * **Cross validation**: using **cv** (with n_fold = 10), obtain a kind of "confidence interval" for one of the metrics studied in class: * $\mu \pm \sigma$ = mean $\pm$ standard deviation * **Validation curve**: Replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_validation_curve.html#sphx-glr-auto-examples-model-selection-plot-validation-curve-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. * **ROC-AUC curve**: Replicate the example from the following [link](https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#sphx-glr-auto-examples-model-selection-plot-roc-py) but with the appropriate model, parameters and metric. Draw conclusions from the plot. **SVC is selected as the best model**
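For the validation curve, scikit-learn also provides `validation_curve` directly; a minimal sketch (assuming `X_train` and `y_train` from Exercise 3):

```python
import numpy as np
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

param_range = np.arange(1, 10)
train_scores, test_scores = validation_curve(
    SVC(kernel="rbf"), X_train, y_train,
    param_name="C", param_range=param_range,
    scoring="accuracy", cv=5)
print("mean train accuracy per C:", train_scores.mean(axis=1))
print("mean CV accuracy per C  :", test_scores.mean(axis=1))
```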
###Code
from sklearn.model_selection import cross_val_score
svm_best = SVC(kernel='rbf', C=10)
scores = cross_val_score(svm_best, X, y, cv=10)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
###Output
Accuracy: 0.98 (+/- 0.03)
###Markdown
Curva de validación
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
parameters = {'kernel':['rbf'], 'C': np.arange(1,10)}
svm = SVC()
gs = GridSearchCV(svm, parameters,return_train_score=True)
gs.fit(X_train,y_train)
C_values= np.arange(1,10)
test_accuracy = []
for C_val in C_values:
svm = SVC(kernel='rbf', C=C_val)
svm.fit(X_train,y_train)
test_accuracy.append(svm.score(X_test,y_test))
fig, ax = plt.subplots(figsize=(15,8))
ax.plot(C_values,gs.cv_results_['mean_train_score'],color='g',lw=1.5,label='train_acc')
ax.plot(C_values,gs.cv_results_['mean_test_score'],color='y',lw=1.5,label='cv_acc')
ax.plot(C_values,test_accuracy,color='r',lw=1.5,label='test_acc')
plt.fill_between(C_values, gs.cv_results_['mean_test_score']-gs.cv_results_['std_test_score'], gs.cv_results_['mean_test_score']+gs.cv_results_['std_test_score'],color='gray', alpha=0.2)
plt.title("CV Accuracy versus Value of C")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
###Output
_____no_output_____
###Markdown
The conclusion from the plot is that the best value of the parameter would be $C=4$, since it is the one that satisfies the smallest-standard-deviation rule while also giving a good score on the test set, unlike the grid-search choice of $C=7$, for which the test-set score is not as good. ROC curve
###Code
from sklearn.metrics import roc_curve, auc
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.preprocessing import label_binarize
#from sklearn.cross_validation import train_test_split
from sklearn.model_selection import train_test_split
from itertools import cycle
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from numpy import interp  # scipy.interp was removed in recent SciPy versions; numpy.interp is equivalent here
from sklearn.metrics import roc_auc_score
%matplotlib inline
# Binarize the output
y = label_binarize(y, classes=[i for i in range(10)])
n_classes = y.shape[1]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2,
random_state=42)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(SVC(kernel='rbf', C=4, probability=True,
random_state=42))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
import matplotlib.colors
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(8,6))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.4f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.4f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle([plt.cm.tab20(i) for i in range(10)])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color,
label='ROC curve Número {0} (area = {1:0.4f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
In general, all the digit classes are predicted well by the model, with quite good accuracy. The few cases with errors are, for example, 3 and 8, which could be explained by the similar shapes of those digits, and likewise 6 and 9. Exercise 5__Dimensionality reduction:__ Taking into account the best model found in `Exercise 3`, you must perform a dimensionality reduction of the dataset. To do so, tackle the problem using the two approaches seen in class: * **Feature selection*** **Feature extraction**__Questions to answer:__Once the dimensionality reduction is done, produce some comparative statistics and plots between the original dataset and the new dataset (dataset size, model run time, etc.) Feature selection (SelectKBest)
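A compact way to combine feature selection and the classifier is a `Pipeline`; the sketch below assumes the `digits` DataFrame used above and rebuilds the same train/test split:

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

X_all = digits.drop(columns="target").values
y_all = digits["target"].values
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, test_size=0.2, random_state=42)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=40)),  # keep the 40 most informative features
    ("svc", SVC(kernel="rbf", C=7)),
])
pipe.fit(X_tr, y_tr)
print("Accuracy with 40 selected features:", pipe.score(X_te, y_te))
```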
###Code
# we keep the 40 most informative features.
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.feature_selection import chi2
X_training = digits.drop('target',axis=1)
y_training = digits['target']
columnas = list(X_training.columns.values)
seleccionadas = SelectKBest(f_classif, k=40).fit(X_training, y_training)
# 40 features are kept because roughly 20 attributes are all zeros, and the method is expected
# to flag them as unimportant
catrib = seleccionadas.get_support()
atributos = [columnas[i] for i in list(catrib.nonzero()[0])]
atributos
df_selec= digits[atributos]
# statistics
import statsmodels.api as sm
model = sm.OLS(digits['target'], sm.add_constant(df_selec))
results = model.fit()
print(results.summary())
plt.figure(num=None, figsize=(30, 30))  # set the size of the plotting window
k = 1  # counter for the subplot position
for i in df_selec.columns:  # loop over the selected columns to build the histograms
    plt.subplot(8, 8, k)
    plt.hist(df_selec[i], bins=60)
    plt.title('Histogram for cell ' + i)
    k += 1
plt.show()
total_original = digits.drop(['target'],axis=1).shape[0]*digits.drop(['target'],axis=1).shape[1]
total_nuevo = df_selec.shape[0]*df_selec.shape[1]
df_comparar = pd.DataFrame(columns=['Df', 'counts'])
df_comparar.loc[0]= ['Original',total_original]
df_comparar.loc[1]= ['Nuevo',total_nuevo]
df_comparar
fig, ax = plt.subplots(figsize=(8,4),nrows=1)
sns.barplot(data=df_comparar, x='Df', y='counts', palette="Blues_d",ax=ax)
ax.set_title('Number of data points')
plt.show()
print("Number of values in the original DataFrame:", total_original)
print("Number of values in the DataFrame after feature selection:", total_nuevo)
# compare against the first model, kernel='rbf', C=7, now trained on the selected features
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(
    df_selec.values, digits["target"].values, test_size=0.2, random_state=42)
# Train the model
clf_2 = SVC(kernel= 'rbf', C=7)
print("Model fitting time")
%timeit clf_2.fit(X_train_2, y_train_2)
# Prediction
y_pred_2 = clf_2.predict(X_test_2)
# Evaluate
confusion_matrix(y_test_2, y_pred_2)
# Metrics
target_names = ['numero '+ str(i) for i in range(0,10)]
print(classification_report(y_test_2, y_pred_2, target_names=target_names, digits=5))
###Output
precision recall f1-score support
numero 0 1.00000 1.00000 1.00000 33
numero 1 1.00000 1.00000 1.00000 28
numero 2 1.00000 1.00000 1.00000 33
numero 3 1.00000 0.97059 0.98507 34
numero 4 0.97872 1.00000 0.98925 46
numero 5 0.95918 1.00000 0.97917 47
numero 6 1.00000 1.00000 1.00000 35
numero 7 1.00000 0.97059 0.98507 34
numero 8 1.00000 0.96667 0.98305 30
numero 9 0.97500 0.97500 0.97500 40
accuracy 0.98889 360
macro avg 0.99129 0.98828 0.98966 360
weighted avg 0.98917 0.98889 0.98890 360
###Markdown
It turns out to be slightly faster (the difference is barely significant), and the accuracy improves. Feature extraction (PCA)
###Code
# Scaling the data
from sklearn.preprocessing import StandardScaler
features = X_training.columns
X_escal = StandardScaler().fit_transform(X)
# fit the model
from sklearn.decomposition import PCA
n=40
pca = PCA(n_components=n)
principalComponents = pca.fit_transform(X_escal)
# plot the variance explained by each component
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = ['PC'+ str(i) for i in range(1,n+1)]
plt.figure(figsize=(12,4))
plt.bar(x= range(1,n+1), height=percent_variance, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.xticks(rotation=90)
plt.show()
# plot the cumulative explained variance over the components
percent_variance_cum = np.cumsum(percent_variance)
columns = ['PC1' + '+...+' + 'PC' + str(i) for i in range(2,n+1)]
columns.insert(0, 'PC1')
plt.figure(figsize=(12,4))
plt.bar(x= range(1,n+1), height=percent_variance_cum, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.xticks(rotation=90)
plt.show()
percent_variance_cum
###Output
_____no_output_____
###Markdown
We note that if we take the first $40$ principal components, we can explain about 95.1% of the variance of the original variables.
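A compact alternative (a sketch, assuming the standardized array `X_escal` defined above) is to let PCA choose the number of components for a target explained-variance ratio:

```python
from sklearn.decomposition import PCA

pca_95 = PCA(n_components=0.95)            # keep ~95% of the variance
X_reduced = pca_95.fit_transform(X_escal)
print("Components kept:", pca_95.n_components_)
print("Explained variance retained:", pca_95.explained_variance_ratio_.sum())
```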
###Code
pca = PCA(n_components=40)
principalComponents = pca.fit_transform(X)
principalDataframe = pd.DataFrame(data = principalComponents, columns = ['PC'+str(i) for i in range(1,41)])
targetDataframe = digits[['target']]
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head()
# Statistics for PCA
model_pca = sm.OLS(targetDataframe, sm.add_constant(principalDataframe))
results = model_pca.fit()
print(results.summary())
# Build the comparison dataframe for PCA
total_nuevo_pca = principalComponents.shape[0]*principalComponents.shape[1]
df_comparar_pca = pd.DataFrame(columns=['Df', 'counts'])
df_comparar_pca.loc[0]= ['Original',total_original]
df_comparar_pca.loc[1]= ['Nuevo',total_nuevo_pca]
fig, ax = plt.subplots(figsize=(8,4),nrows=1)
sns.barplot(data=df_comparar_pca, x='Df', y='counts', palette="Blues_d",ax=ax)
ax.set_title('Number of data points')
plt.show()
print("Number of values in the original DataFrame:", total_original)
print("Number of values in the DataFrame after PCA:", total_nuevo_pca)
X_pca= principalComponents
y_pca= targetDataframe
X_train_pca, X_test_pca, y_train_pca, y_test_pca = train_test_split(X_pca, y_pca, test_size=.2,
random_state=42)
# Train the model
clf_pca = SVC(kernel= 'rbf', C=7)
print("Model fitting time")
%timeit clf_pca.fit(X_train_pca, y_train_pca.values.ravel())  # ravel to get a 1-D label array
# Prediction
y_pred_pca = clf_pca.predict(X_test_pca)
# Evaluate
confusion_matrix(y_test_pca, y_pred_pca)
# Metrics
target_names = ['numero '+ str(i) for i in range(0,10)]
print(classification_report(y_test_pca, y_pred_pca, target_names=target_names, digits=5))
###Output
precision recall f1-score support
numero 0 1.00000 1.00000 1.00000 33
numero 1 1.00000 1.00000 1.00000 28
numero 2 1.00000 1.00000 1.00000 33
numero 3 1.00000 0.97059 0.98507 34
numero 4 1.00000 1.00000 1.00000 46
numero 5 0.97872 0.97872 0.97872 47
numero 6 0.97222 1.00000 0.98592 35
numero 7 0.97059 0.97059 0.97059 34
numero 8 1.00000 1.00000 1.00000 30
numero 9 0.97500 0.97500 0.97500 40
accuracy 0.98889 360
macro avg 0.98965 0.98949 0.98953 360
weighted avg 0.98897 0.98889 0.98889 360
###Markdown
The run time is practically the same as for the original model, and the accuracy is slightly better. Exercise 6__Visualizing results:__ Below, code is provided to compare the predicted labels against the true labels of the _test_ set.
###Code
def mostar_resultados(digits,model,nx=5, ny=5,label = "correctos"):
"""
    Shows the prediction results of a particular classification
    model. The values shown are taken from the test-set
    results.
    - label == 'correctos': returns the samples the model classifies correctly.
    - label == 'incorrectos': returns the samples the model misclassifies.
    Note: the model passed as an argument must NOT be fitted yet.
    :param digits: 'digits' dataset
    :param model: sklearn model
    :param nx: number of rows (subplots)
    :param ny: number of columns (subplots)
    :param label: correctly or incorrectly classified samples
    :return: matplotlib plots
"""
X = digits.drop(columns="target").values
Y = digits["target"].values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state = 42)
    model.fit(X_train, Y_train) # fitting the model
Y_pred = list(model.predict(X_test))
    # Show the correctly classified samples
    if label=="correctos":
        mask = Y_pred == Y_test
        color = "green"
    # Show the misclassified samples
    elif label=="incorrectos":
        mask = Y_pred != Y_test
        color = "red"
else:
        raise ValueError("Invalid value for label")
X_aux = X_test[mask]
y_aux_true = np.array(Y_test)[mask]
y_aux_pred = np.array(Y_pred)[mask]
    # Plot the first nx*ny examples of the selected subset
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j + ny * i
data = X_aux[index, :].reshape(8,8)
label_pred = str(int(y_aux_pred[index]))
label_true = str(int(y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap='gray_r')
ax[i][j].text(0, 0, label_pred, horizontalalignment='center', verticalalignment='center', fontsize=10, color=color)
ax[i][j].text(7, 0, label_true, horizontalalignment='center', verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
**Question*** Taking into account the best model found in `Exercise 3`, plot the results when: * the predicted and true values are equal * the predicted and true values differ
###Code
modelo = SVC(kernel= 'rbf', C=4) # Best model
###Output
_____no_output_____
###Markdown
Correctly classified
###Code
mostar_resultados(digits,modelo,nx=5, ny=5,label = "correctos")
###Output
_____no_output_____
###Markdown
Misclassified
###Code
mostar_resultados(digits,modelo,nx=2, ny=2,label = "incorrectos")
###Output
_____no_output_____ |
corpora__analysis.ipynb | ###Markdown
Dataset gathering from Wikimedia dumps---Here we download the dataset from the given link, unzip the archive, and extract the files. I am using the **'wikiextractor'** library to run the necessary extraction commands.
###Code
!wget http://dumps.wikimedia.org/tawiki/latest/tawiki-latest-pages-articles.xml.bz2
!bunzip2 tawiki-latest-pages-articles.xml.bz2
!ls -ltr
!git clone https://github.com/attardi/wikiextractor.git
!ls
!python ./wikiextractor/wikiextractor/WikiExtractor.py tawiki-latest-pages-articles.xml --no-templates -q
###Output
_____no_output_____
###Markdown
Getting List of article collection files---
###Code
import glob
flist=glob.glob('text/*/*')
len(flist)
flist[:2]
def future_name(fn):
a,b,c=fn.split('/')
return '/'.join([a,b,b+'_'+c+'.txt'])
future_name(flist[0])
import os
for f in flist:
os.rename(f,future_name(f))
flist2=glob.glob('text/*/*')
len(flist2),flist2[:2]
with open(flist2[0], encoding='utf-8') as f:
text=f.read()
print(text[:1000])
###Output
<doc id="3" url="https://ta.wikipedia.org/wiki?curid=3" title="முதற் பக்கம்">
முதற் பக்கம்
<templatestyles src="Main Page/minerva.css" />
</doc>
<doc id="12" url="https://ta.wikipedia.org/wiki?curid=12" title="கட்டிடக்கலை">
கட்டிடக்கலை
கட்டிடக்கலை என்பது கட்டிடங்கள் மற்றும் அதன் உடல் கட்டமைப்புகளை வடிவமைத்தல், செயல்முறைத் திட்டமிடல், மற்றும் கட்டிடங்கள் கட்டுவதை உள்ளடக்கியதாகும். கட்டடக்கலை படைப்புகள், கட்டிடங்கள் பொருள் வடிவம், பெரும்பாலும் கலாச்சார சின்னங்களாக மற்றும் கலை படைப்புகளாக காணப்படுகின்றது. வரலாற்று நாகரிகங்கள் பெரும்பாலும் அவர்களின் கட்டிடகலை சாதனைகளின் மூலம் அடையாளம் காணப்படுகின்றன.
ஒரு விரிவான வரைவிலக்கணம், பெருமட்டத்தில், நகரத் திட்டமிடல், நகர்ப்புற வடிவமைப்பு மற்றும் நிலத்தோற்றம் முதலியவற்றையும், நுண்மட்டத்தில், தளபாடங்கள், உற்பத்திப்பொருள் முதலியவற்றை உள்ளடக்கிய, முழு உருவாக்கச் சூழலின் வடிவமைப்பைக் கட்டிடக்கலைக்குள் அடக்கும்.
மேற்படி விடயத்தில், தற்போது கிடைக்கும் மிகப் பழைய ஆக்கம், கி.பி. முதலாம் நூற்றாண்டைச் சேர்ந்த உரோமானியக் கட்டடக் கலைஞரான விட்ருவியஸ்
###Markdown
Extract titles of articles
###Code
import re
example_title ='<doc id="12" url="https://ta.wikipedia.org/wiki?curid=12" title="கட்டிடக்கலை">'
pattern = 'title="(.*?)">'
with open(flist2[0], encoding='utf-8') as f:
text=f.read()
titles=re.findall(pattern, text)
print(len(titles), 'articles found')
###Output
84 articles found
###Markdown
Total number of all articles
###Code
pattern = 'title="(.*?)">'
def get_article_count(fname):
with open(fname, encoding='utf-8') as f:
text=f.read()
titles=re.findall(pattern, text)
return len(titles)
print(sum([get_article_count(f) for f in flist2]))
###Output
185250
###Markdown
Create tiny subset for analyses
###Code
!mv text tawiki_large
flist3 = glob.glob('tawiki_large/*/*')
flist3[0], len(flist3)
import random
random.shuffle(flist3)
flist3[:2]
flist_small = flist3[:40]
!mkdir tawiki_small
for file in flist_small:
with open(file, encoding='utf-8') as f:
text = f.read()
name=file.split('/')
newname=name[0].replace('large', 'small')+'/'+name[2]
with open(newname, "w") as text_file:
text_file.write(text)
pattern = 'title="(.*?)">'
def get_article_count(fname):
with open(fname, encoding='utf-8') as f:
text=f.read()
titles=re.findall(pattern, text)
return len(titles)
flist_small = glob.glob('tawiki_small/*')
print(sum([get_article_count(f) for f in flist_small]))
###Output
13597
###Markdown
Dataset Preprocessing and Tokenization of tamil words for selected text file---
###Code
import nltk, re, string, collections
from nltk.util import ngrams # function for making ngrams
# this corpus is pretty big, so let's look at just one of the files in it
with open("/content/drive/MyDrive/nlp_da1/tawiki_large/AA/AA_wiki_00.txt", "r") as file:
text = file.read()
# check to make sure the file read in alright; let's print out the first 1000 characters
text[0:1000]
# get rid of all the XML markup
text = re.sub('<.*>','',text)
# get rid of the "ENDOFARTICLE." text
text = re.sub('ENDOFARTICLE.','',text)
text = re.sub('[a-zA-Z]','',text)
# get rid of punctuation
punctuationNoPeriod = "[" + re.sub("\.","",string.punctuation) + "]"
text = re.sub(punctuationNoPeriod, "", text)
# make sure it looks ok
text[0:1000]
###Output
_____no_output_____
###Markdown
N-gram analyses (2,3,4)
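The same counting idea extends to tri-grams and quad-grams; a minimal sketch (assuming `tokenized` as built in the next cell):

```python
import collections
from nltk.util import ngrams

trigram_freq = collections.Counter(ngrams(tokenized, 3))
quadgram_freq = collections.Counter(ngrams(tokenized, 4))
print(trigram_freq.most_common(10))
print(quadgram_freq.most_common(10))
```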
###Code
# first get individual words
tokenized = text.split()
# and get a list of all the bi-grams
esBigrams = ngrams(tokenized, 2)
# and get a list of all the tri-grams
esBigrams2 = ngrams(tokenized, 3)
# and get a list of all the quad-grams
esBigrams3 = ngrams(tokenized, 4)
# If you like, you can uncomment the next like to take a look at
# the first ten to make sure they look ok. Please note that doing so
# will consume the generator & will break the next block of code, so you'll
# need to re-comment it and run this block again to get it to work.
#list(esBigrams)[:10]
# get the frequency of each bigram in our corpus
esBigramFreq = collections.Counter(esBigrams)
# what are the 20 most common bigrams in this Tamil corpus?
esBigramFreq.most_common(20)
tokenized
import pandas as pd
bigrams_series = (pd.Series(nltk.ngrams(tokenized, 2)).value_counts())[:20]  # top 20, to match the plot title below
bigrams_series
###Output
_____no_output_____
###Markdown
Plotting the 20 most frequently occurring bigrams
###Code
import matplotlib.pyplot as plt
from matplotlib import rc
import matplotlib as mlp
from pathlib import Path
import matplotlib.font_manager as fontmanager
nirm = Path('/content/drive/MyDrive/nlp_da1/vijaya.ttf')
tam_font = fontmanager.FontProperties(fname=nirm)
bigrams_series.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
plt.title('20 Most Frequently Occurring Bigrams')
plt.yticks(fontproperties=tam_font)
plt.ylabel('Bigram')
plt.xlabel('# of Occurrences')
###Output
_____no_output_____
###Markdown
Plotting the most frequently occurring unigrams
###Code
from collections import Counter
import seaborn as sns
import matplotlib.font_manager as fontmanager
import matplotlib as mpl
import matplotlib.pyplot as plt
frequency = Counter(tokenized)
df = pd.DataFrame(frequency.most_common(30))
plt.rcParams['figure.figsize'] = [12, 15]
df.columns =['Word', 'Frequency']
df_sorted= df.sort_values('Frequency')
df_sorted.head()
sns.set(font_scale = 1.3, style = 'whitegrid')
nirm = Path('/content/drive/MyDrive/nlp_da1/vijaya.ttf')
tam_font = fontmanager.FontProperties(fname=nirm)
# plotting
fig = plt.figure(figsize=(30, 25))
ax = df_sorted.plot.barh(x='Word', y='Frequency')
for i in ax.patches:
plt.text(i.get_width()+0.2, i.get_y()+0.5,
str(round((i.get_width()), 2)),
fontsize=10, fontweight='bold',
color='grey')
plt.title('Word Count')
plt.yticks(fontproperties=tam_font)
plt.ylabel('word')
plt.xlabel('# of Occurrences')
###Output
_____no_output_____
###Markdown
Compiling all text files together for the complete analysis. Here we concatenate the text files, run the preprocessing and tokenization, and save all the text into a single corpus file---
###Code
import os
for dirname, _, filenames in os.walk('/content/drive/MyDrive/nlp_da1/tawiki_small/'):
for filename in filenames:
with open('corpus.txt', 'a', encoding='latin-1') as ffile:
with open(os.path.join(dirname, filename), 'r', encoding='latin-1') as rfile:
ffile.write(rfile.read())
def getListWordsPreprocessed(corpus):
''' return a corpus after removing all the patterns and the xml markup (if any) '''
text = corpus.lower()
text = re.sub('<.*>', '', text)
text = re.sub('ENDOFARTICLE.', '', text)
punctuation2remove = "[" + re.sub('[,.;:?!()+/-]', '', string.punctuation) + "]"
text = re.sub(punctuation2remove, '', text)
text = re.sub('\n\n+', '\n', text)
text = re.sub(';+\n', '\n', text)
text = re.sub('\s*-\s', ' ', text)
text = re.sub('\s+\.', ' ', text)
text = re.sub('^\n', '', text, flags=re.MULTILINE)
text = re.sub('^\s*\w+\s*\n', '', text, flags=re.MULTILINE)
text = re.sub('\((\s*|\+*|\w\.\s*)\d+(\-*|\s*|,\s*)\d*\-*\)', ' ', text)
text = re.sub('\(\s*\)', ' ', text)
text = text.replace(',', '')
text = text.replace('.', '')
text = text.replace(';', '')
text = text.replace(':', '')
text = text.replace('?', '')
text = text.replace('!', '')
text = text.replace('(', '')
text = text.replace(')', '')
text = text.replace('/', '')
text = text.replace('+', '')
text = text.replace('-', '')
text = re.sub('\s\d+\s', '', text)
words = text.split()
#remove all words with 5 or fewer occurences
word_cnts = Counter(words)
trimmed_words = [word for word in words if word_cnts[word] > 5]
return trimmed_words
###Output
_____no_output_____
###Markdown
Frequently used Tamil words. Here we find the most frequently used Tamil words in the given dataset.
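As a quick sanity check of how concentrated the vocabulary is, the share of the corpus covered by the top-10 words can be computed; a sketch assuming `words` and `vocabulary_counts` as defined in the cell below:

```python
top10_total = sum(count for _, count in vocabulary_counts.most_common(10))
print("Share of all tokens covered by the 10 most frequent words:",
      round(top10_total / len(words), 4))
```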
###Code
with open('./corpus.txt', 'r') as f:
text = f.read()
words = getListWordsPreprocessed(text)
print(words[:50])
print("Total amount of words: {}".format(len(words)))
print("Amount of unique words: {}".format(len(set(words))))
# creating a counter of words ...
vocabulary_counts = Counter(words)
# let's see the 10 most common words
print("10 most commmon words:")
print(vocabulary_counts.most_common(10))
# sorting the words in order of frequency (from most to least frequent)
vocabulary_sorted = sorted(vocabulary_counts, key=vocabulary_counts.get, reverse=True)
# creating the lookup tables
int_to_vocab = {ii: word for ii, word in enumerate(vocabulary_sorted)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
# create a vocabulary of ints (i..e map the complete vocabulary to its int values)
int_vocabulary = [vocab_to_int[word] for word in words]
print("First 50 int-words of the int vocabulary:")
print(int_vocabulary[:50])
frequent= vocabulary_counts.most_common(10)
frequent
freq = pd.DataFrame(frequent)
freq
###Output
_____no_output_____
###Markdown
Plotting 10 most freqeuntly used tamil words using barplot
###Code
from collections import Counter
import seaborn as sns
import matplotlib.font_manager as fontmanager
import matplotlib as mpl
import matplotlib.pyplot as plt
frequency = Counter(tokenized)
df = pd.DataFrame(frequency.most_common(30))
plt.rcParams['figure.figsize'] = [12, 15]
freq.columns =['Word', 'Frequency']
freq_sorted= freq.sort_values('Frequency')
sns.set(font_scale = 1.3, style = 'whitegrid')
nirm = Path('/content/drive/MyDrive/nlp_da1/vijaya.ttf')
tam_font = fontmanager.FontProperties(fname=nirm)
# plotting
fig = plt.figure(figsize=(40, 35))
ax = freq_sorted.plot.barh(x='Word', y='Frequency')
for i in ax.patches:
plt.text(i.get_width()+0.2, i.get_y()+0.5,
str(round((i.get_width()), 2)),
fontsize=10, fontweight='bold',
color='grey')
plt.title('Word Count')
plt.yticks(fontproperties=tam_font)
plt.ylabel('word')
plt.xlabel('# of Occurrences')
###Output
_____no_output_____ |
module_0/Notebooks/Basic_Image_manipulation.ipynb | ###Markdown
Live Tutorial 1a - Basic Image manipulation in a Python interactive notebook.---------- Qbio Summer School 2021--------------```Instructor: Luis U. Aguilera Author: Luis U. Aguilera Contact Info: [email protected] (c) 2021 Dr. Brian Munsky, Dr. Luis Aguilera, Will Raymond Colorado State University. Licensed under MIT License.``` Abstract This notebook provides a list of procedures to analyze microscope images. It describes what a scientific image is and how to extract relevant information from it. At the end of the tutorial, the student is expected to have acquired the computational skills to implement the following list of objectives independently. List of objectives1. To load the Python modules commonly used to work with microscopy data.2. To understand what a computational image is.3. To understand what a monochromatic image and a color image are.4. To select and slice the dimensions in a sequence of microscope images.5. To apply different filters to remove noise from the image.6. To perform basic mathematical operations, including rotation, translation, and scaling. Working with images in Python The following lines of code import and install some libraries. For more information, look up the library name on the Python Package Index [(PyPI)](https://pypi.org/).
###Code
# Loading libraries
import matplotlib.pyplot as plt # Library used for plotting
from matplotlib.patches import Rectangle # module to plot a rectangle in the image
import urllib.request # importing library to download data
import numpy as np # library for array manipulation
import seaborn as sn # plotring library
import pandas as pd # data frames library
import tifffile # library to store numpy arrays in TIFF
import pathlib; from pathlib import Path # library to work with file paths
# Installing and updating libraries
%%capture
!pip uninstall scikit-image -y
!pip install -U scikit-image
!pip install wget
import skimage # Library for image manipulation
from skimage.io import imread # sublibrary from skimage
import wget # importing library to download data
###Output
_____no_output_____
###Markdown
Downloading, opening and visualizing images
###Code
# Downloading the image from figshare SupFig1c_BG_MAX_Cell04.tif
urls = ['https://ndownloader.figshare.com/files/26751209','https://ndownloader.figshare.com/files/26751203','https://ndownloader.figshare.com/files/26751212','https://ndownloader.figshare.com/files/26751218']
print('Downloading file...')
urllib.request.urlretrieve(urls[1], './image_cell.tif') #
# importing the image as variable img
figName = './image_cell.tif'
img = imread(figName)
###Output
_____no_output_____
###Markdown
Understanding digital images. What is a digital image?
###Code
# what is img?
print('image type =', type(img))
###Output
_____no_output_____
###Markdown
What is the shape of the image?
###Code
print('image shape =',img.shape )
###Output
_____no_output_____
###Markdown
Displaying a section of the image. Notice that an image is only a matrix of numbers.
###Code
df = pd.DataFrame(img[0,250:260,250:260,0] ) # converting the image into a pandas data frame
# Plotting
fig, ax = plt.subplots(1,2, figsize=(25, 10))
ax[0].imshow(img[0,:,:,0],cmap='gray')
ax[0].add_patch(Rectangle(xy=(250, 250),width=10,height=10,linewidth=3,color='yellow',fill=False)) # rectangle in the image
# Plotting the heatmap of a section in the image
sn.heatmap(df, annot=True,cmap="gray",fmt='d', ax=ax[1])
plt.show()
plt.figure(figsize=(7,7))
plt.imshow(img[0,:,:,0],cmap='gray') # Notice that only a timepoint and a color is plotted.
plt.show()
###Output
_____no_output_____
###Markdown
From the [image's publication](https://www.biorxiv.org/content/10.1101/2020.04.03.024414v2) we can obtain the metadata, which indicates the following information:

Dimension | Meaning | Value
---------|----------|----------
0 | Time | 35 (frames)
1 | Y-dimension | 512 pixels
2 | X-dimension | 512 pixels
3 | Color | 3 color image (R,G,B)

Intensity values in the image
###Code
# minimum and maximum intensity values on the image
max_intensity_value = np.amax(img)
min_intensity_value = np.amin(img)
print('Maximum intensity : ', max_intensity_value)
print('Minimum intensity : ', min_intensity_value)
###Output
_____no_output_____
###Markdown
Intensity distribution in the image
###Code
# plotting the intensity distribution for a specific timepoint and a specific channel
plt.figure(figsize=(7,7))
plt.hist(img[0,:,:,0].flatten(), bins=80,color='orangered')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Intensity Histogram')
plt.show()
###Output
_____no_output_____
###Markdown
Summary of image properties: * 4 dimensional tensor [T,Y,X,C]. * Numpy array* Intensity range (0, 6380) Grayscale images
###Code
# please try to run the following line of code and find why it doesn't work?
#plt.imshow(img)
# Visualzing a monochromatic image
plt.figure(figsize=(7,7))
plt.imshow(img[0,:,:,0],cmap='gray') # Notice that only a timepoint and a color is plotted.
plt.show()
# Visualzing a monochromatic image with a different colormap
plt.figure(figsize=(7,7))
plt.imshow(img[0,:,:,0],cmap= 'BrBG') # colormap options are: 'Accent', 'Accent_r', 'Blues', 'Blues_r', 'BrBG', 'BrBG_r', 'BuGn', 'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r', 'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r', 'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr', 'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn', 'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn', 'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r', 'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r', 'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar', 'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r', 'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'nipy_spectral', 'nipy_spectral_r', 'ocean',
plt.show()
###Output
_____no_output_____
###Markdown
Bit depth, intensity in images. Bit depth is the information stored in each pixel of the image.

Bits | Color values: $2^n$
---------|------------------
1 bit | 2
8 bit | 256
12 bit | 4096
16 bit | 65536
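The number of levels in the table is simply $2^n$; a quick check in Python:

```python
for n_bits in (1, 8, 12, 16):
    print(f"{n_bits:>2} bits -> {2**n_bits} intensity values")
```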
###Code
# https://stackoverflow.com/questions/46689428/convert-np-array-of-type-float64-to-type-uint8-scaling-values/46689933
def convert(img, target_type_min, target_type_max, target_type):
'''
    This function is intended to normalize img and convert it to the specified target_type.
    img: numpy array
    target_type_min: int
    target_type_max: int
    target_type: numpy dtype, e.g. np.uint8 or np.bool_
'''
imin = img.min()
imax = img.max()
a = (target_type_max - target_type_min) / (imax - imin)
b = target_type_max - a * imax
new_img = (a * img + b).astype(target_type)
return new_img
###Output
_____no_output_____
###Markdown
Check this [link](https://numpy.org/doc/stable/user/basics.types.html) for a complete list of numpy data types.
###Code
# Normalizing and converting images between different bit-depths.
# Convert the image to boolean format, with values in [0, 1] (1-bit).
img_int1 = convert(img, 0,1,target_type=np.bool_)
# Convert the image to unsigned byte format, with values in [0, 8] (3-bit range).
img_int3 = convert(img, 0,8,target_type=np.uint8)
# Convert the image to unsigned byte format, with values in [0, 255] (8-bit).
img_int8 = convert(img, 0,255,target_type=np.uint8)
print('Range in 1-bit image: [', np.amin(img_int1),',' ,np.amax(img_int1) , ']' )
print('Range in 3-bit image: [', np.amin(img_int3),',' ,np.amax(img_int3) , ']' )
print('Range in 8-bit image: [', np.amin(img_int8),',' ,np.amax(img_int8) , ']' )
print('Range in 16-bit image: [', np.amin(img),',' ,np.amax(img) , ']' )
# Side-by-side comparison
fig, ax = plt.subplots(1,3, figsize=(30, 20))
ax[0].imshow(img_int3[0,:,:,0],cmap='gray')
ax[0].set(title='3bit')
ax[1].imshow(img_int8[0,:,:,0],cmap='gray')
ax[1].set(title='8bit')
ax[2].imshow(img[0,:,:,0],cmap='gray')
ax[2].set(title='16bit')
plt.show()
###Output
_____no_output_____
###Markdown
Values in the image
###Code
# Selecting a section of the images and converting this section into a data frame
min_selection_area = 300
max_selection_area = min_selection_area+10
df_3bit = pd.DataFrame(img_int3[0,min_selection_area:max_selection_area,min_selection_area:max_selection_area,0] ) # Range in 3-bit image: [ 0 , 8 ]
df_8bit = pd.DataFrame(img_int8[0,min_selection_area:max_selection_area,min_selection_area:max_selection_area,0] ) # Range in 8-bit image: [ 0 , 255 ]
df_16bit = pd.DataFrame(img[0,min_selection_area:max_selection_area,min_selection_area:max_selection_area,0] ) # Range in 16-bit image: [ 0 , 65536 ]. In this particular image the original maximum value is 6380
# Plotting
fig, ax = plt.subplots(1,3, figsize=(30, 7))
# Plotting the heatmap of a section in the image
sn.heatmap(df_3bit, annot=True,cmap="gray",fmt='d', ax=ax[0])
ax[0].set_title('3-bit image')
sn.heatmap(df_8bit, annot=True,cmap="gray",fmt='d', ax=ax[1])
ax[1].set_title('8-bit image')
sn.heatmap(df_16bit, annot=True,cmap="gray",fmt='d', ax=ax[2])
ax[2].set_title('16-bit image')
plt.show()
###Output
_____no_output_____
###Markdown
File size for different data types and bit depth
###Code
#saving the images to disk
tifffile.imwrite('temp_img_int8.tif', img_int8)
tifffile.imwrite('temp_img_int16.tif', img)
# Loading the images
print("File size of the 8-bit image in Mb is: ", round(Path('temp_img_int8.tif').stat().st_size/1e6))
print("File size of the 16-bit image in Mb is: ", round(Path('temp_img_int16.tif').stat().st_size/1e6))
###Output
_____no_output_____
###Markdown
Color images. Color channel [R,G,B].
###Code
# Visualzing a color image
plt.figure(figsize=(10,10))
plt.imshow(img_int8[0,:,:,:]) # Notice that only a timepoint and all colors are plotted.
plt.show()
###Output
_____no_output_____
###Markdown
Working with images in Python Basic image manipulation Slicing In this section we select parts of the image.The image is a numpy array with dimensions:```image [time, y-axis, x-axis, colors]```If we need to select the following elements:* timepoint(frame) 5* y-axis from 100 to 200 pixel* x-axis from 230 to 300 pixel* "Green" color (Color 1 in the standard format [R,G,B]),The way to slice the numpy array is as follows:```image[5, 100:200, 230:300, 1]```
###Code
# Plotting a subsection of the image.
# Time point: 0
# Y-range: [100:300]
# X-range: [100:300]
# Channel: Red (0)
plt.figure(figsize=(7,7))
plt.imshow(img_int8[0,100:300,100:300,0],cmap='gray') # Notice that only a timepoint and a color is plotted.
plt.show()
# Plotting a subsection of the image.
# Time point: 22
# Y-range: [230:300]
# X-range: [155:350]
# Channel: Blue (2)
plt.figure(figsize=(7,7))
plt.imshow(img_int8[22,230:300,155:350,2],cmap='gray') # Notice that only a timepoint and a color is plotted.
plt.show()
###Output
_____no_output_____
###Markdown
Thresholding
###Code
# Making values less than the average equal to zero.
img_copy = img.copy() # making a copy of our img
img_section = img_copy[0,:,:,0] # selecting a timepoint and color channel
#img_section[img_section>1000]=1000 # thresholding image values larger than 1000 equal to 1000.
img_section[img_section>np.mean(img_section)]=np.mean(img_section) # thresholding image values larger than the mean equal to the mean.
# Plotting
plt.figure(figsize=(7,7))
plt.imshow(img_section,cmap='gray') # Notice that only a timepoint and a color is plotted.
plt.show()
###Output
_____no_output_____
###Markdown
Filters [Filters](https://ai.stanford.edu/~syyeung/cvweb/tutorial1.html) are used for:* Noise reduction* Edge detection* Sharpening* Blurring. The mathematical operation is a 2D convolution: a smaller kernel matrix is defined and the same mathematical operation is applied around each pixel of the entire image. A more complete explanation can be found in this [video](https://youtu.be/8rrHTtUzyZA?t=72). Gaussian Filter. Noise reduction and blurring. $G_\sigma(x,y) = \frac{1}{2\pi\sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}}$
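A sketch of the underlying 2D convolution, applied to one frame of `img` with a simple 5x5 averaging kernel standing in for any filter kernel:

```python
import numpy as np
from scipy.ndimage import convolve

mean_kernel = np.ones((5, 5)) / 25.0           # normalized averaging kernel
frame = img[0, :, :, 0].astype(float)          # one time point, one channel
filtered_frame = convolve(frame, mean_kernel)  # same operation around every pixel
```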
###Code
# Section that creates the Gaussian Kernel Matrix
def gaussian_kernel (size_matrix,sigma):
'''
This function returns a normalized gaussian kernel matrix
size_matrix : int
sigma: float
'''
ax = np.linspace(-(size_matrix - 1) / 2., (size_matrix - 1) / 2., size_matrix)
xx, yy = np.meshgrid(ax, ax)
kernel = np.exp(-0.5 * (np.square(xx) + np.square(yy)) / np.square(sigma))
kernel = kernel/kernel.sum() # normalizing to the sum
return kernel
# Gaussian Kernel matrix for different sigmas.
kernel_gaussian_sigma_3 = gaussian_kernel (size_matrix=20,sigma=3)
kernel_gaussian_sigma_5 = gaussian_kernel (size_matrix=20,sigma=5)
kernel_gaussian_sigma_10 = gaussian_kernel (size_matrix=20,sigma=10)
# Side-by-side comparison
fig, ax = plt.subplots(1,3, figsize=(20, 10))
ax[0].imshow(kernel_gaussian_sigma_3,cmap='gray')
ax[0].set(title='Gaussian kernel $\sigma$ =3')
ax[1].imshow(kernel_gaussian_sigma_5,cmap='gray')
ax[1].set(title='Gaussian kernel $\sigma$ =5')
ax[2].imshow(kernel_gaussian_sigma_10,cmap='gray')
ax[2].set(title='Gaussian kernel $\sigma$ =10')
plt.show()
###Output
_____no_output_____
###Markdown
Example using [gaussian filter scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_filter.html). For a complete list of filters in scipy use the following [link](https://docs.scipy.org/doc/scipy/reference/ndimage.html)
###Code
# Importing the library with the filter modules
from scipy.ndimage import gaussian_filter
img_copy = img.copy() # making a copy of our img
img_section = img_copy[0,:,:,0] # selecting a timepoint and color channel
# Applying the filter
img_gaussian_filter_simga_1 = gaussian_filter(img_section, sigma=1)
img_gaussian_filter_simga_10 = gaussian_filter(img_section, sigma=10)
# Side-by-side comparison
fig, ax = plt.subplots(1,3, figsize=(30, 10))
ax[0].imshow(img_section,cmap='gray')
ax[0].set(title='Original')
# noise reduction
ax[1].imshow(img_gaussian_filter_simga_1,cmap='gray')
ax[1].set(title='Gaussian Filter $\sigma$ =1 Noise reduction')
# Blurring
ax[2].imshow(img_gaussian_filter_simga_10,cmap='gray')
ax[2].set(title='Gaussian Filter $\sigma$ =10 Image Blurring')
plt.show()
###Output
_____no_output_____
###Markdown
Filters in scikit-image. [Difference of gaussians](https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.difference_of_gaussians). This filter acts as a band-pass: it keeps features whose spatial scale lies between the low and high sigma values. For a complete list of filters in scikit-image use the following [link](https://scikit-image.org/docs/stable/api/skimage.filters.html).
###Code
# Importing the skimage filters module
from skimage.filters import difference_of_gaussians
img_copy = img.copy() # making a copy of our img
img_section = img_copy[0,:,:,0] # selecting a timepoint and color channel
# Applying the filter to our image
img_diff_gaussians = difference_of_gaussians(img_section,low_sigma=1, high_sigma=10)
#img_diff_gaussians = difference_of_gaussians(img_section,low_sigma=5, high_sigma=10)
# Side-by-side comparison
fig, ax = plt.subplots(1,2, figsize=(20, 10))
ax[0].imshow(img_section,cmap='gray')
ax[0].set(title='Original')
ax[1].imshow(img_diff_gaussians,cmap='gray')
ax[1].set(title='Difference of gaussians')
plt.show()
###Output
_____no_output_____
###Markdown
Rotation Simple rotation can be achieved by array manipulation. To rotate an image by 90$^\circ$, one can use the transpose property of the array: [transpose](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.T.html)
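For reference, numpy also offers `rot90`, which performs a true 90$^\circ$ counter-clockwise rotation in one call (the transpose used below mirrors the image across its main diagonal); a minimal sketch:

```python
import numpy as np
rotated_90 = np.rot90(img[0, :, :, 0])   # rotate one frame by 90 degrees counter-clockwise
```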
###Code
img_copy = img.copy() # making a copy of our img
img_section = img_copy[0,:,:,0] # selecting a timepoint and color channel
transposed_img = img_section.T # transposed property in a numpy array
# Side-by-side comparison
fig, ax = plt.subplots(1,2, figsize=(20, 10))
ax[0].imshow(img_section,cmap='gray')
ax[0].set(title='Original')
ax[1].imshow(transposed_img,cmap='gray')
ax[1].set(title= 'Image rotated by 90 degrees' )
plt.show()
###Output
_____no_output_____
###Markdown
Library [Rotate scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.rotate.htmlscipy.ndimage.rotate)
###Code
# Importing scipy.ndimage, used here for rotation
from scipy import ndimage as nd
img_copy = img.copy() # making a copy of our img
img_section = img_copy[0,:,:,0] # selecting a timepoint and color channel
# rotate image to a given angle
selected_angle = 90
img_rotation = nd.rotate(img_section, angle=selected_angle)
# Side-by-side comparison
fig, ax = plt.subplots(1,2, figsize=(20, 10))
ax[0].imshow(img_section,cmap='gray')
ax[0].set(title='Original')
ax[1].imshow(img_rotation,cmap='gray')
ax[1].set(title= 'Image rotated by '+str(selected_angle)+ ' degrees' )
plt.show()
###Output
_____no_output_____
###Markdown
Image transformation. Consists of applying rotation, scaling, and translation to the image. List of available [transformations in skimage](https://scikit-image.org/docs/stable/auto_examples/transform/plot_transform_types.html). Blog with more information about [applying transformations to images](https://towardsdatascience.com/image-processing-with-python-applying-homography-for-image-warping-84cd87d2108f)
###Code
# Importing the skimage transform module
from skimage import transform
img_copy = img.copy() # making a copy of our img
img_section = img_copy[0,:,:,0] # selecting a timepoint and color channel
# transformation matrix
tform = transform.SimilarityTransform(
scale = 0.95, # float, scaling value
rotation = np.pi/90, # Rotation angle in counter-clockwise direction as radians. pi/180 rad = 1 degrees
translation=(100, 1)) # (x, y) values for translation .
print('Transformation matrix : \n', tform.params , '\n')
# Applying the transformation
tf_img = transform.warp(img_section, tform.inverse)
# Side-by-side comparison
fig, ax = plt.subplots(1,2, figsize=(20, 10))
ax[0].imshow(img_section,cmap='gray')
ax[0].set(title='Original')
ax[1].imshow(tf_img,cmap='gray')
ax[1].set_title('transformation')
plt.show()
###Output
_____no_output_____
###Markdown
Working with a sequence of images Video Visualizing a video with [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/)
###Code
import ipywidgets as widgets # Importing library
from ipywidgets import interact, interactive, HBox, Layout, VBox # importing modules and functions.
# Figure size
plt.rcParams["figure.figsize"] = (10,10)
def video_viewer( drop_channel, time):
'''
This function is intended to display an image from an array of images (specifically, video: img_int8). img_int8 is a numpy array with dimension [T,Y,X,C].
drop_channel : str with options 'Ch_0', 'Ch_1', 'Ch_2', 'All'
time: int with range 0 to the number of frames in video.
'''
plt.figure(1)
if drop_channel == 'Ch_0':
temp_image = img_int8[time,:,:,0]
plt.imshow(temp_image,cmap='gray')
elif drop_channel == 'Ch_1':
temp_image = img_int8[time,:,:,1]
plt.imshow(temp_image,cmap='gray')
elif drop_channel == 'Ch_2':
temp_image = img_int8[time,:,:,2]
plt.imshow(temp_image,cmap='gray')
else:
temp_image = img_int8[time,:,:,:]
plt.imshow(temp_image)
plt.show()
# Defining an interactive plot
interactive_plot = interactive(video_viewer,
drop_channel = widgets.Dropdown(options=['Ch_0', 'Ch_1', 'Ch_2', 'All'],description='Channel',value='Ch_1'), # drop to select the channel
time = widgets.IntSlider(min=0,max=img_int8.shape[0]-1,step=1,value=0,description='Time')) # time slider parameters
# Creates the controls
controls = HBox(interactive_plot.children[:-1], layout = Layout(flex_flow='row wrap'))
# Creates the outputs
output = interactive_plot.children[-1]
# Display the controls and output as an interactive widget
display(VBox([controls, output]))
###Output
_____no_output_____
###Markdown
Images with 3-dimensional space, Fluorescence in situ hybridization (FISH) images.
###Code
# Downloading the image to Colab
%%capture
drive = pathlib.Path("/content")
found_files = list(drive.glob('**/FISH_example.zip'))
if len(found_files) != 0:
print(f"File already downloaded and can be found in {found_files[0]}.")
else:
!wget --no-check-certificate 'https://www.dropbox.com/s/i9mz2b3qminj4wh/FISH_example.zip?dl=0' -r -A 'uc*' -e robots=off -nd -O 'FISH_example.zip'
!unzip FISH_example.zip
# importing the image as variable img
figName_FISH = './FISH_example.tif'
img_FISH = imread(figName_FISH)
# this image has dimension [Z,Y,X]
print(img_FISH.shape)
max_val = np.percentile(img_FISH, 99)
img_FISH [img_FISH> max_val] = max_val
# Plotting the FISH image
fig, ax = plt.subplots(1,img_FISH.shape[0], figsize=(30, 10))
for i in range (0,img_FISH.shape[0]):
ax[i].imshow(img_FISH[i,:,:],cmap='gray')
#ax[i].set(title= ['Z=',str(i)])
ax[i].axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Moving in and out of focus
###Code
def FISH_viewer( z_value):
'''
    This function displays a single z-slice from the FISH z-stack (img_FISH), a numpy array with dimensions [Z,Y,X].
    z_value: int in the range 0 to the number of z-slices minus one.
'''
plt.figure(1)
temp_FISH_image = img_FISH[z_value,:,:]
plt.imshow(temp_FISH_image,cmap='gray')
plt.show()
# Defining an interactive plot
interactive_plot = interactive(FISH_viewer,
z_value = widgets.IntSlider(min=0,max=img_FISH.shape[0]-1,step=1,value=0,description='z-value')) # time slider parameters
# Creates the controls
controls = HBox(interactive_plot.children[:-1], layout = Layout(flex_flow='row wrap'))
# Creates the outputs
output = interactive_plot.children[-1]
# Display the controls and output as an interactive widget
display(VBox([controls, output]))
###Output
_____no_output_____
###Markdown
Operations on multiple images Maximum projections
###Code
# Making a copy of our sequence of images
img_FISH_copy = img_FISH.copy() # making a copy of our img
# applying a maximum projection
img_max_z_projection = np.max(img_FISH, axis=0)
# Plotting
plt.figure(figsize=(7,7))
plt.imshow(img_max_z_projection,cmap='gray')
plt.axis('off')
plt.show()
# Printing results
print('Dimensions of the original sequence of images :', img_FISH.shape, '\n')
print('Dimensions of the maximum z-projection :', img_max_z_projection.shape)
###Output
_____no_output_____
###Markdown
Normalizing intensity for every channel and time point.
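An equivalent vectorized sketch of the same per-frame, per-channel min-max normalization (assuming `img` with shape [T, Y, X, C]):

```python
import numpy as np

img_float = img.astype(np.float64)
mins = img_float.min(axis=(1, 2), keepdims=True)   # per time point and channel
maxs = img_float.max(axis=(1, 2), keepdims=True)
img_norm_vec = (img_float - mins) / (maxs - mins)  # values in [0, 1]
```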
###Code
img_normalized = np.zeros_like(img, dtype=np.float64) # preallocating memory as float so the normalized values in [0, 1] are not truncated to integers
number_timepoints, y_dim, x_dim, number_channels = img.shape[0], img.shape[1], img.shape[2], img.shape[3] # obtaining the dimensions size
# Normalization using a nested for-loop
for index_channels in range (number_channels): # iteration for every channel
for index_time in range (number_timepoints): # iterating for every time point
max_val = np.amax(img[index_time,:,:,index_channels])
min_val = np.amin(img[index_time,:,:,index_channels])
img_normalized[index_time,:,:,index_channels] = (img[index_time,:,:,index_channels]-min_val) / (max_val-min_val) # normalization
# Printing the output
print('Range values in the original sequence of images: (' , np.amin(img) ,',', np.amax(img) ,')\n' )
print('Range values in the normalized sequence of images: (' , np.amin(img_normalized) ,',', np.amax(img_normalized) ,')\n' )
###Output
_____no_output_____
###Markdown
Transposing dimensions
###Code
# Making a copy of our sequence of images
img_int8_copy = img_int8.copy() # making a copy of our img
# reshaping the video. Changing the Time position (0) to the last place (3).
img_transposed = np.transpose(img_int8_copy, (3, 1, 2, 0))
# Printing results
print('Dimensions of the original sequence of images :', img_int8_copy.shape, '\n')
print('Dimensions of the transposed sequence of images :', img_transposed.shape)
###Output
_____no_output_____ |
notebook/ML_PR.ipynb | ###Markdown
**Attention: due to the nature of the data, this notebook must be run in an environment with at least 25 GB of RAM to work correctly.** **Data Science and Visualization****Final Project - Delivery 03**Students: Gleyson Roberto do Nascimento. RA: 043801. Electrical Engineering. Negli René Gallardo Alvarado. RA: 234066. Health. Rafael Vinícius da Silveira. RA: 137382. Physics. Sérgio Sevileanu. RA: 941095. Electrical Engineering. In this Google Colaboratory notebook, machine learning will be applied to data from the State of Paraná for the years 2008 to 2018 from the [SIHSUS](https://bigdata-metadados.icict.fiocruz.br/dataset/sistema-de-informacoes-hospitalares-do-sus-sihsus/resource/ae85ac54-6734-43b8-a820-6129a854e1ff) database.Accordingly, some initial definitions and a disclaimer are needed for this project:A **mistaken diagnosis (category 0 of variable v258)** is defined as a case in which there was more than one ICD-10 (CID10) diagnosis, but the codes belong to the same group, so the mistake is plausible given the similarity of symptoms among those codes;A **diagnostic failure (category 1 of variable v258)** is defined as a case in which there was more than one ICD-10 (CID10) diagnosis and the codes belong to different groups, so that although symptoms may overlap, the professional should have carried out a deeper analysis before the diagnosis.The **correct diagnosis** (a single ICD-10 diagnosis, unchanged up to discharge) was removed from the analysis because its very high percentage of occurrence skewed the results; **Disclaimer**: Considering the nature of the SIHSUS database, that is, a Big Data system in which countless employees of the Brazilian Unified Health System (SUS) have access and enter data manually under very different realities and conditions, there is a real possibility of systematic error; therefore, the accuracy of this work should be viewed with caution. Installing RAPIDS on Google Colab Checking whether a GPU is available
###Code
!nvidia-smi
###Output
Wed Jun 23 18:26:59 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.27 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 37C P0 26W / 250W | 0MiB / 16280MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Setup:The setup script installs1. Updates gcc in Colab1. Installs Conda1. Installs RAPIDS' current stable version of its libraries, as well as some external libraries including: 1. cuDF 1. cuML 1. cuGraph 1. cuSpatial 1. cuSignal 1. BlazingSQL 1. xgboost1. Copies RAPIDS .so files into the current working directory, a necessary workaround for RAPIDS+Colab integration.
###Code
# This get the RAPIDS-Colab install files and test check your GPU. Run this and the next cell only.
# Please read the output of this cell. If your Colab Instance is not RAPIDS compatible, it will warn you and give you remediation steps.
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!python rapidsai-csp-utils/colab/env-check.py
# This will update the Colab environment and restart the kernel. Don't run the next cell until you see the session crash.
!bash rapidsai-csp-utils/colab/update_gcc.sh
import os
os._exit(00)
# This will install CondaColab. This will restart your kernel one last time. Run this cell by itself and only run the next cell once you see the session crash.
import condacolab
condacolab.install()
# you can now run the rest of the cells as normal
import condacolab
condacolab.check()
# Installing RAPIDS is now 'python rapidsai-csp-utils/colab/install_rapids.py <release> <packages>'
# The <release> options are 'stable' and 'nightly'. Leaving it blank or adding any other words will default to stable.
# The <packages> option are default blank or 'core'. By default, we install RAPIDSAI and BlazingSQL. The 'core' option will install only RAPIDSAI and not include BlazingSQL,
!python rapidsai-csp-utils/colab/install_rapids.py stable
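# Optional sanity check (a sketch, not part of the original install script):
# once the install above finishes, the RAPIDS libraries should be importable.
import cudf
print("cuDF version:", cudf.__version__)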
###Output
Installing RAPIDS Stable 21.06
Starting the RAPIDS+BlazingSQL install on Colab. This will take about 15 minutes.
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
## Package Plan ##
environment location: /usr/local
added / updated specs:
- cudatoolkit=11.0
- gcsfs
- llvmlite
- openssl
- python=3.7
- rapids-blazing=21.06
The following packages will be downloaded:
package | build
---------------------------|-----------------
abseil-cpp-20210324.1 | h9c3ff4c_0 1015 KB conda-forge
aiohttp-3.7.4.post0 | py37h5e8e339_0 625 KB conda-forge
anyio-3.2.0 | py37h89c1867_0 138 KB conda-forge
appdirs-1.4.4 | pyh9f0ad1d_0 13 KB conda-forge
argon2-cffi-20.1.0 | py37h5e8e339_2 47 KB conda-forge
arrow-cpp-1.0.1 |py37haa335b2_40_cuda 21.1 MB conda-forge
arrow-cpp-proc-3.0.0 | cuda 24 KB conda-forge
async-timeout-3.0.1 | py_1000 11 KB conda-forge
async_generator-1.10 | py_0 18 KB conda-forge
attrs-21.2.0 | pyhd8ed1ab_0 44 KB conda-forge
aws-c-cal-0.5.11 | h95a6274_0 37 KB conda-forge
aws-c-common-0.6.2 | h7f98852_0 168 KB conda-forge
aws-c-event-stream-0.2.7 | h3541f99_13 47 KB conda-forge
aws-c-io-0.10.5 | hfb6a706_0 121 KB conda-forge
aws-checksums-0.1.11 | ha31a3da_7 50 KB conda-forge
aws-sdk-cpp-1.8.186 | hb4091e7_3 4.6 MB conda-forge
backcall-0.2.0 | pyh9f0ad1d_0 13 KB conda-forge
backports-1.0 | py_2 4 KB conda-forge
backports.functools_lru_cache-1.6.4| pyhd8ed1ab_0 9 KB conda-forge
blazingsql-21.06.00 |cuda_11.0_py37_g95ff589f8_0 190.2 MB rapidsai
bleach-3.3.0 | pyh44b312d_0 111 KB conda-forge
blinker-1.4 | py_1 13 KB conda-forge
bokeh-2.2.3 | py37h89c1867_0 7.0 MB conda-forge
boost-1.72.0 | py37h48f8a5e_1 339 KB conda-forge
boost-cpp-1.72.0 | h9d3c048_4 16.3 MB conda-forge
brotli-1.0.9 | h9c3ff4c_4 389 KB conda-forge
ca-certificates-2021.5.30 | ha878542_0 136 KB conda-forge
cachetools-4.2.2 | pyhd8ed1ab_0 12 KB conda-forge
cairo-1.16.0 | h6cf1ce9_1008 1.5 MB conda-forge
certifi-2021.5.30 | py37h89c1867_0 141 KB conda-forge
cfitsio-3.470 | hb418390_7 1.3 MB conda-forge
click-7.1.2 | pyh9f0ad1d_0 64 KB conda-forge
click-plugins-1.1.1 | py_0 9 KB conda-forge
cligj-0.7.2 | pyhd8ed1ab_0 10 KB conda-forge
cloudpickle-1.6.0 | py_0 22 KB conda-forge
colorcet-2.0.6 | pyhd8ed1ab_0 1.5 MB conda-forge
conda-4.10.1 | py37h89c1867_0 3.1 MB conda-forge
cudatoolkit-11.0.221 | h6bb024c_0 953.0 MB nvidia
cudf-21.06.01 |cuda_11.0_py37_g101fc0fda4_2 108.4 MB rapidsai
cudf_kafka-21.06.01 |py37_g101fc0fda4_2 1.7 MB rapidsai
cugraph-21.06.00 | py37_gf9ffd2de_0 65.0 MB rapidsai
cuml-21.06.02 |cuda11.0_py37_g7dfbf8d9e_0 78.9 MB rapidsai
cupy-9.0.0 | py37h4fdb0f7_0 50.3 MB conda-forge
curl-7.77.0 | hea6ffbf_0 149 KB conda-forge
cusignal-21.06.00 | py38_ga78207b_0 1.0 MB rapidsai
cuspatial-21.06.00 | py37_g37798cd_0 15.2 MB rapidsai
custreamz-21.06.01 |py37_g101fc0fda4_2 32 KB rapidsai
cuxfilter-21.06.00 | py37_g9459467_0 136 KB rapidsai
cycler-0.10.0 | py_2 9 KB conda-forge
cyrus-sasl-2.1.27 | h230043b_2 224 KB conda-forge
cytoolz-0.11.0 | py37h5e8e339_3 403 KB conda-forge
dask-2021.5.0 | pyhd8ed1ab_0 4 KB conda-forge
dask-core-2021.5.0 | pyhd8ed1ab_0 735 KB conda-forge
dask-cuda-21.06.00 | py37_0 110 KB rapidsai
dask-cudf-21.06.01 |py37_g101fc0fda4_2 103 KB rapidsai
datashader-0.11.1 | pyh9f0ad1d_0 14.0 MB conda-forge
datashape-0.5.4 | py_1 49 KB conda-forge
decorator-4.4.2 | py_0 11 KB conda-forge
defusedxml-0.7.1 | pyhd8ed1ab_0 23 KB conda-forge
distributed-2021.5.0 | py37h89c1867_0 1.1 MB conda-forge
dlpack-0.5 | h9c3ff4c_0 12 KB conda-forge
entrypoints-0.3 | pyhd8ed1ab_1003 8 KB conda-forge
expat-2.4.1 | h9c3ff4c_0 182 KB conda-forge
faiss-proc-1.0.0 | cuda 24 KB rapidsai
fastavro-1.4.1 | py37h5e8e339_0 496 KB conda-forge
fastrlock-0.6 | py37hcd2ae1e_0 31 KB conda-forge
fiona-1.8.20 | py37ha0cc35a_0 1.1 MB conda-forge
fontconfig-2.13.1 | hba837de_1005 357 KB conda-forge
freetype-2.10.4 | h0708190_1 890 KB conda-forge
freexl-1.0.6 | h7f98852_0 48 KB conda-forge
fsspec-2021.6.0 | pyhd8ed1ab_0 79 KB conda-forge
future-0.18.2 | py37h89c1867_3 714 KB conda-forge
gcsfs-2021.6.0 | pyhd8ed1ab_0 23 KB conda-forge
gdal-3.2.2 | py37hb0e9ad2_0 1.5 MB conda-forge
geopandas-0.9.0 | pyhd8ed1ab_1 5 KB conda-forge
geopandas-base-0.9.0 | pyhd8ed1ab_1 950 KB conda-forge
geos-3.9.1 | h9c3ff4c_2 1.1 MB conda-forge
geotiff-1.6.0 | hcf90da6_5 296 KB conda-forge
gettext-0.19.8.1 | h0b5b191_1005 3.6 MB conda-forge
gflags-2.2.2 | he1b5a44_1004 114 KB conda-forge
giflib-5.2.1 | h36c2ea0_2 77 KB conda-forge
glog-0.5.0 | h48cff8f_0 104 KB conda-forge
google-auth-1.30.2 | pyh6c4a22f_0 77 KB conda-forge
google-auth-oauthlib-0.4.4 | pyhd8ed1ab_0 19 KB conda-forge
google-cloud-cpp-1.28.0 | hbd34f9f_0 9.3 MB conda-forge
greenlet-1.1.0 | py37hcd2ae1e_0 83 KB conda-forge
grpc-cpp-1.38.0 | h2519f57_0 3.6 MB conda-forge
hdf4-4.2.15 | h10796ff_3 950 KB conda-forge
hdf5-1.10.6 |nompi_h6a2412b_1114 3.1 MB conda-forge
heapdict-1.0.1 | py_0 7 KB conda-forge
importlib-metadata-4.5.0 | py37h89c1867_0 31 KB conda-forge
ipykernel-5.5.5 | py37h085eea5_0 167 KB conda-forge
ipython-7.24.1 | py37h085eea5_0 1.1 MB conda-forge
ipython_genutils-0.2.0 | py_1 21 KB conda-forge
ipywidgets-7.6.3 | pyhd3deb0d_0 101 KB conda-forge
jedi-0.18.0 | py37h89c1867_2 923 KB conda-forge
jinja2-3.0.1 | pyhd8ed1ab_0 99 KB conda-forge
joblib-1.0.1 | pyhd8ed1ab_0 206 KB conda-forge
jpeg-9d | h36c2ea0_0 264 KB conda-forge
jpype1-1.3.0 | py37h2527ec5_0 482 KB conda-forge
json-c-0.15 | h98cffda_0 274 KB conda-forge
jsonschema-3.2.0 | pyhd8ed1ab_3 45 KB conda-forge
jupyter-server-proxy-3.0.2 | pyhd8ed1ab_0 27 KB conda-forge
jupyter_client-6.1.12 | pyhd8ed1ab_0 79 KB conda-forge
jupyter_core-4.7.1 | py37h89c1867_0 72 KB conda-forge
jupyter_server-1.8.0 | pyhd8ed1ab_0 255 KB conda-forge
jupyterlab_pygments-0.1.2 | pyh9f0ad1d_0 8 KB conda-forge
jupyterlab_widgets-1.0.0 | pyhd8ed1ab_1 130 KB conda-forge
kealib-1.4.14 | hcc255d8_2 186 KB conda-forge
kiwisolver-1.3.1 | py37h2527ec5_1 78 KB conda-forge
krb5-1.19.1 | hcc1bbae_0 1.4 MB conda-forge
lcms2-2.12 | hddcbb42_0 443 KB conda-forge
libblas-3.9.0 | 9_openblas 11 KB conda-forge
libcblas-3.9.0 | 9_openblas 11 KB conda-forge
libcrc32c-1.1.1 | h9c3ff4c_2 20 KB conda-forge
libcudf-21.06.01 |cuda11.0_g101fc0fda4_2 187.7 MB rapidsai
libcudf_kafka-21.06.01 | g101fc0fda4_2 125 KB rapidsai
libcugraph-21.06.00 |cuda11.0_gf9ffd2de_0 213.6 MB rapidsai
libcuml-21.06.02 |cuda11.0_g7dfbf8d9e_0 95.2 MB rapidsai
libcumlprims-21.06.00 |cuda11.0_gfda2e6c_0 1.1 MB nvidia
libcurl-7.77.0 | h2574ce0_0 334 KB conda-forge
libcuspatial-21.06.00 |cuda11.0_g37798cd_0 7.6 MB rapidsai
libdap4-3.20.6 | hd7c4107_2 11.3 MB conda-forge
libevent-2.1.10 | hcdb4288_3 1.1 MB conda-forge
libfaiss-1.7.0 |cuda110h8045045_8_cuda 67.0 MB conda-forge
libgcrypt-1.9.3 | h7f98852_1 677 KB conda-forge
libgdal-3.2.2 | h804b7da_0 13.2 MB conda-forge
libgfortran-ng-9.3.0 | hff62375_19 22 KB conda-forge
libgfortran5-9.3.0 | hff62375_19 2.0 MB conda-forge
libglib-2.68.3 | h3e27bee_0 3.1 MB conda-forge
libgpg-error-1.42 | h9c3ff4c_0 278 KB conda-forge
libgsasl-1.8.0 | 2 125 KB conda-forge
libhwloc-2.3.0 | h5e5b7d1_1 2.7 MB conda-forge
libkml-1.3.0 | hd79254b_1012 640 KB conda-forge
liblapack-3.9.0 | 9_openblas 11 KB conda-forge
libllvm10-10.0.1 | he513fc3_3 26.4 MB conda-forge
libnetcdf-4.7.4 |nompi_h56d31a8_107 1.3 MB conda-forge
libntlm-1.4 | h7f98852_1002 32 KB conda-forge
libopenblas-0.3.15 |pthreads_h8fe5266_1 9.2 MB conda-forge
libpng-1.6.37 | h21135ba_2 306 KB conda-forge
libpq-13.3 | hd57d9b9_0 2.7 MB conda-forge
libprotobuf-3.16.0 | h780b84a_0 2.5 MB conda-forge
librdkafka-1.5.3 | hc49e61c_1 11.2 MB conda-forge
librmm-21.06.00 |cuda11.0_gee432a0_0 57 KB rapidsai
librttopo-1.1.0 | h1185371_6 235 KB conda-forge
libsodium-1.0.18 | h36c2ea0_1 366 KB conda-forge
libspatialindex-1.9.3 | h9c3ff4c_3 4.6 MB conda-forge
libspatialite-5.0.1 | h20cb978_4 4.4 MB conda-forge
libthrift-0.14.1 | he6d91bd_2 4.5 MB conda-forge
libtiff-4.2.0 | hbd63e13_2 639 KB conda-forge
libutf8proc-2.6.1 | h7f98852_0 95 KB conda-forge
libuuid-2.32.1 | h7f98852_1000 28 KB conda-forge
libuv-1.41.0 | h7f98852_0 1.0 MB conda-forge
libwebp-1.2.0 | h3452ae3_0 85 KB conda-forge
libwebp-base-1.2.0 | h7f98852_2 815 KB conda-forge
libxcb-1.13 | h7f98852_1003 395 KB conda-forge
libxgboost-1.4.2dev.rapidsai21.06| cuda11.0_0 115.3 MB rapidsai
libxml2-2.9.12 | h72842e0_0 772 KB conda-forge
llvmlite-0.36.0 | py37h9d7f4d0_0 2.7 MB conda-forge
locket-0.2.0 | py_2 6 KB conda-forge
mapclassify-2.4.2 | pyhd8ed1ab_0 36 KB conda-forge
markdown-3.3.4 | pyhd8ed1ab_0 67 KB conda-forge
markupsafe-2.0.1 | py37h5e8e339_0 22 KB conda-forge
matplotlib-base-3.4.2 | py37hdd32ed1_0 7.2 MB conda-forge
matplotlib-inline-0.1.2 | pyhd8ed1ab_2 11 KB conda-forge
mistune-0.8.4 |py37h5e8e339_1003 54 KB conda-forge
msgpack-python-1.0.2 | py37h2527ec5_1 91 KB conda-forge
multidict-5.1.0 | py37h5e8e339_1 67 KB conda-forge
multipledispatch-0.6.0 | py_0 12 KB conda-forge
munch-2.5.0 | py_0 12 KB conda-forge
nbclient-0.5.3 | pyhd8ed1ab_0 67 KB conda-forge
nbconvert-6.0.7 | py37h89c1867_3 535 KB conda-forge
nbformat-5.1.3 | pyhd8ed1ab_0 47 KB conda-forge
nccl-2.9.9.1 | h96e36e3_0 82.3 MB conda-forge
nest-asyncio-1.5.1 | pyhd8ed1ab_0 9 KB conda-forge
netifaces-0.10.9 |py37h5e8e339_1003 17 KB conda-forge
networkx-2.5.1 | pyhd8ed1ab_0 1.2 MB conda-forge
nlohmann_json-3.9.1 | h9c3ff4c_1 122 KB conda-forge
nodejs-14.15.4 | h92b4a50_1 15.7 MB conda-forge
notebook-6.4.0 | pyha770c72_0 6.1 MB conda-forge
numba-0.53.1 | py37hb11d6e1_1 3.7 MB conda-forge
numpy-1.21.0 | py37h038b26d_0 6.1 MB conda-forge
nvtx-0.2.3 | py37h5e8e339_0 55 KB conda-forge
oauthlib-3.1.1 | pyhd8ed1ab_0 87 KB conda-forge
olefile-0.46 | pyh9f0ad1d_1 32 KB conda-forge
openjdk-8.0.282 | h7f98852_0 99.3 MB conda-forge
openjpeg-2.4.0 | hb52868f_1 444 KB conda-forge
openssl-1.1.1k | h7f98852_0 2.1 MB conda-forge
orc-1.6.7 | h89a63ab_2 751 KB conda-forge
packaging-20.9 | pyh44b312d_0 35 KB conda-forge
pandas-1.2.5 | py37h219a48f_0 11.8 MB conda-forge
pandoc-2.14.0.2 | h7f98852_0 12.0 MB conda-forge
pandocfilters-1.4.2 | py_1 9 KB conda-forge
panel-0.10.3 | pyhd8ed1ab_0 6.1 MB conda-forge
param-1.10.1 | pyhd3deb0d_0 64 KB conda-forge
parquet-cpp-1.5.1 | 2 3 KB conda-forge
parso-0.8.2 | pyhd8ed1ab_0 68 KB conda-forge
partd-1.2.0 | pyhd8ed1ab_0 18 KB conda-forge
pcre-8.45 | h9c3ff4c_0 253 KB conda-forge
pexpect-4.8.0 | pyh9f0ad1d_2 47 KB conda-forge
pickle5-0.0.11 | py37h5e8e339_0 173 KB conda-forge
pickleshare-0.7.5 | py_1003 9 KB conda-forge
pillow-8.2.0 | py37h4600e1f_1 684 KB conda-forge
pixman-0.40.0 | h36c2ea0_0 627 KB conda-forge
poppler-21.03.0 | h93df280_0 15.9 MB conda-forge
poppler-data-0.4.10 | 0 3.8 MB conda-forge
postgresql-13.3 | h2510834_0 5.3 MB conda-forge
proj-8.0.0 | h277dcde_0 3.1 MB conda-forge
prometheus_client-0.11.0 | pyhd8ed1ab_0 46 KB conda-forge
prompt-toolkit-3.0.19 | pyha770c72_0 244 KB conda-forge
protobuf-3.16.0 | py37hcd2ae1e_0 342 KB conda-forge
psutil-5.8.0 | py37h5e8e339_1 342 KB conda-forge
pthread-stubs-0.4 | h36c2ea0_1001 5 KB conda-forge
ptyprocess-0.7.0 | pyhd3deb0d_0 16 KB conda-forge
py-xgboost-1.4.2dev.rapidsai21.06| cuda11.0py37_0 151 KB rapidsai
pyarrow-1.0.1 |py37hb63ea2f_40_cuda 2.4 MB conda-forge
pyasn1-0.4.8 | py_0 53 KB conda-forge
pyasn1-modules-0.2.7 | py_0 60 KB conda-forge
pyct-0.4.6 | py_0 3 KB conda-forge
pyct-core-0.4.6 | py_0 13 KB conda-forge
pydeck-0.5.0 | pyh9f0ad1d_0 3.6 MB conda-forge
pyee-7.0.4 | pyh9f0ad1d_0 14 KB conda-forge
pygments-2.9.0 | pyhd8ed1ab_0 754 KB conda-forge
pyhive-0.6.4 | pyhd8ed1ab_0 39 KB conda-forge
pyjwt-2.1.0 | pyhd8ed1ab_0 17 KB conda-forge
pynvml-11.0.0 | pyhd8ed1ab_0 39 KB conda-forge
pyparsing-2.4.7 | pyh9f0ad1d_0 60 KB conda-forge
pyppeteer-0.2.2 | py_1 104 KB conda-forge
pyproj-3.0.1 | py37h2bb2a07_1 484 KB conda-forge
pyrsistent-0.17.3 | py37h5e8e339_2 89 KB conda-forge
python-confluent-kafka-1.5.0| py37h8f50634_0 122 KB conda-forge
python-dateutil-2.8.1 | py_0 220 KB conda-forge
pytz-2021.1 | pyhd8ed1ab_0 239 KB conda-forge
pyu2f-0.1.5 | pyhd8ed1ab_0 31 KB conda-forge
pyviz_comms-2.0.2 | pyhd8ed1ab_0 25 KB conda-forge
pyyaml-5.4.1 | py37h5e8e339_0 189 KB conda-forge
pyzmq-22.1.0 | py37h336d617_0 500 KB conda-forge
rapids-21.06.00 |cuda11.0_py37_ge3c8282_427 5 KB rapidsai
rapids-blazing-21.06.00 |cuda11.0_py37_ge3c8282_427 5 KB rapidsai
rapids-xgboost-21.06.00 |cuda11.0_py37_ge3c8282_427 4 KB rapidsai
re2-2021.04.01 | h9c3ff4c_0 218 KB conda-forge
readline-8.1 | h46c0cb4_0 295 KB conda-forge
requests-oauthlib-1.3.0 | pyh9f0ad1d_0 21 KB conda-forge
rmm-21.06.00 |cuda_11.0_py37_gee432a0_0 7.0 MB rapidsai
rsa-4.7.2 | pyh44b312d_0 28 KB conda-forge
rtree-0.9.7 | py37h0b55af0_1 45 KB conda-forge
s2n-1.0.10 | h9b69904_0 442 KB conda-forge
sasl-0.3a1 | py37hcd2ae1e_0 74 KB conda-forge
scikit-learn-0.24.2 | py37h18a542f_0 7.5 MB conda-forge
scipy-1.6.3 | py37h29e03ee_0 20.5 MB conda-forge
send2trash-1.7.1 | pyhd8ed1ab_0 17 KB conda-forge
shapely-1.7.1 | py37h2d1e849_5 438 KB conda-forge
simpervisor-0.4 | pyhd8ed1ab_0 9 KB conda-forge
snappy-1.1.8 | he1b5a44_3 32 KB conda-forge
sniffio-1.2.0 | py37h89c1867_1 15 KB conda-forge
sortedcontainers-2.4.0 | pyhd8ed1ab_0 26 KB conda-forge
spdlog-1.8.5 | h4bd325d_0 353 KB conda-forge
sqlalchemy-1.4.19 | py37h5e8e339_0 2.3 MB conda-forge
streamz-0.6.2 | pyh44b312d_0 59 KB conda-forge
tblib-1.7.0 | pyhd8ed1ab_0 15 KB conda-forge
terminado-0.10.1 | py37h89c1867_0 26 KB conda-forge
testpath-0.5.0 | pyhd8ed1ab_0 86 KB conda-forge
threadpoolctl-2.1.0 | pyh5ca1d4c_0 15 KB conda-forge
thrift-0.13.0 | py37hcd2ae1e_2 120 KB conda-forge
thrift_sasl-0.4.2 | py37h8f50634_0 14 KB conda-forge
tiledb-2.2.9 | h91fcb0e_0 4.0 MB conda-forge
toolz-0.11.1 | py_0 46 KB conda-forge
tornado-6.1 | py37h5e8e339_1 646 KB conda-forge
traitlets-5.0.5 | py_0 81 KB conda-forge
treelite-1.3.0 | py37hfdac9b6_0 2.7 MB conda-forge
typing-extensions-3.10.0.0 | hd8ed1ab_0 8 KB conda-forge
typing_extensions-3.10.0.0 | pyha770c72_0 28 KB conda-forge
tzcode-2021a | h7f98852_1 68 KB conda-forge
tzdata-2021a | he74cb21_0 121 KB conda-forge
ucx-1.9.0+gcd9efd3 | cuda11.0_0 8.2 MB rapidsai
ucx-proc-1.0.0 | gpu 9 KB rapidsai
ucx-py-0.20.0 | py37_gcd9efd3_0 294 KB rapidsai
wcwidth-0.2.5 | pyh9f0ad1d_2 33 KB conda-forge
webencodings-0.5.1 | py_1 12 KB conda-forge
websocket-client-0.57.0 | py37h89c1867_4 59 KB conda-forge
websockets-8.1 | py37h5e8e339_3 90 KB conda-forge
widgetsnbextension-3.5.1 | py37h89c1867_4 1.8 MB conda-forge
xarray-0.18.2 | pyhd8ed1ab_0 599 KB conda-forge
xerces-c-3.2.3 | h9d8b166_2 1.8 MB conda-forge
xgboost-1.4.2dev.rapidsai21.06| cuda11.0py37_0 17 KB rapidsai
xorg-kbproto-1.0.7 | h7f98852_1002 27 KB conda-forge
xorg-libice-1.0.10 | h7f98852_0 58 KB conda-forge
xorg-libsm-1.2.3 | hd9c2040_1000 26 KB conda-forge
xorg-libx11-1.7.2 | h7f98852_0 941 KB conda-forge
xorg-libxau-1.0.9 | h7f98852_0 13 KB conda-forge
xorg-libxdmcp-1.1.3 | h7f98852_0 19 KB conda-forge
xorg-libxext-1.3.4 | h7f98852_1 54 KB conda-forge
xorg-libxrender-0.9.10 | h7f98852_1003 32 KB conda-forge
xorg-renderproto-0.11.1 | h7f98852_1002 9 KB conda-forge
xorg-xextproto-7.3.0 | h7f98852_1002 28 KB conda-forge
xorg-xproto-7.0.31 | h7f98852_1007 73 KB conda-forge
yarl-1.6.3 | py37h5e8e339_1 141 KB conda-forge
zeromq-4.3.4 | h9c3ff4c_0 352 KB conda-forge
zict-2.0.0 | py_0 10 KB conda-forge
zipp-3.4.1 | pyhd8ed1ab_0 11 KB conda-forge
------------------------------------------------------------
Total: 2.67 GB
The following NEW packages will be INSTALLED:
abseil-cpp conda-forge/linux-64::abseil-cpp-20210324.1-h9c3ff4c_0
aiohttp conda-forge/linux-64::aiohttp-3.7.4.post0-py37h5e8e339_0
anyio conda-forge/linux-64::anyio-3.2.0-py37h89c1867_0
appdirs conda-forge/noarch::appdirs-1.4.4-pyh9f0ad1d_0
argon2-cffi conda-forge/linux-64::argon2-cffi-20.1.0-py37h5e8e339_2
arrow-cpp conda-forge/linux-64::arrow-cpp-1.0.1-py37haa335b2_40_cuda
arrow-cpp-proc conda-forge/linux-64::arrow-cpp-proc-3.0.0-cuda
async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000
async_generator conda-forge/noarch::async_generator-1.10-py_0
attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0
aws-c-cal conda-forge/linux-64::aws-c-cal-0.5.11-h95a6274_0
aws-c-common conda-forge/linux-64::aws-c-common-0.6.2-h7f98852_0
aws-c-event-stream conda-forge/linux-64::aws-c-event-stream-0.2.7-h3541f99_13
aws-c-io conda-forge/linux-64::aws-c-io-0.10.5-hfb6a706_0
aws-checksums conda-forge/linux-64::aws-checksums-0.1.11-ha31a3da_7
aws-sdk-cpp conda-forge/linux-64::aws-sdk-cpp-1.8.186-hb4091e7_3
backcall conda-forge/noarch::backcall-0.2.0-pyh9f0ad1d_0
backports conda-forge/noarch::backports-1.0-py_2
backports.functoo~ conda-forge/noarch::backports.functools_lru_cache-1.6.4-pyhd8ed1ab_0
blazingsql rapidsai/linux-64::blazingsql-21.06.00-cuda_11.0_py37_g95ff589f8_0
bleach conda-forge/noarch::bleach-3.3.0-pyh44b312d_0
blinker conda-forge/noarch::blinker-1.4-py_1
bokeh conda-forge/linux-64::bokeh-2.2.3-py37h89c1867_0
boost conda-forge/linux-64::boost-1.72.0-py37h48f8a5e_1
boost-cpp conda-forge/linux-64::boost-cpp-1.72.0-h9d3c048_4
brotli conda-forge/linux-64::brotli-1.0.9-h9c3ff4c_4
cachetools conda-forge/noarch::cachetools-4.2.2-pyhd8ed1ab_0
cairo conda-forge/linux-64::cairo-1.16.0-h6cf1ce9_1008
cfitsio conda-forge/linux-64::cfitsio-3.470-hb418390_7
click conda-forge/noarch::click-7.1.2-pyh9f0ad1d_0
click-plugins conda-forge/noarch::click-plugins-1.1.1-py_0
cligj conda-forge/noarch::cligj-0.7.2-pyhd8ed1ab_0
cloudpickle conda-forge/noarch::cloudpickle-1.6.0-py_0
colorcet conda-forge/noarch::colorcet-2.0.6-pyhd8ed1ab_0
cudatoolkit nvidia/linux-64::cudatoolkit-11.0.221-h6bb024c_0
cudf rapidsai/linux-64::cudf-21.06.01-cuda_11.0_py37_g101fc0fda4_2
cudf_kafka rapidsai/linux-64::cudf_kafka-21.06.01-py37_g101fc0fda4_2
cugraph rapidsai/linux-64::cugraph-21.06.00-py37_gf9ffd2de_0
cuml rapidsai/linux-64::cuml-21.06.02-cuda11.0_py37_g7dfbf8d9e_0
cupy conda-forge/linux-64::cupy-9.0.0-py37h4fdb0f7_0
curl conda-forge/linux-64::curl-7.77.0-hea6ffbf_0
cusignal rapidsai/noarch::cusignal-21.06.00-py38_ga78207b_0
cuspatial rapidsai/linux-64::cuspatial-21.06.00-py37_g37798cd_0
custreamz rapidsai/linux-64::custreamz-21.06.01-py37_g101fc0fda4_2
cuxfilter rapidsai/linux-64::cuxfilter-21.06.00-py37_g9459467_0
cycler conda-forge/noarch::cycler-0.10.0-py_2
cyrus-sasl conda-forge/linux-64::cyrus-sasl-2.1.27-h230043b_2
cytoolz conda-forge/linux-64::cytoolz-0.11.0-py37h5e8e339_3
dask conda-forge/noarch::dask-2021.5.0-pyhd8ed1ab_0
dask-core conda-forge/noarch::dask-core-2021.5.0-pyhd8ed1ab_0
dask-cuda rapidsai/linux-64::dask-cuda-21.06.00-py37_0
dask-cudf rapidsai/linux-64::dask-cudf-21.06.01-py37_g101fc0fda4_2
datashader conda-forge/noarch::datashader-0.11.1-pyh9f0ad1d_0
datashape conda-forge/noarch::datashape-0.5.4-py_1
decorator conda-forge/noarch::decorator-4.4.2-py_0
defusedxml conda-forge/noarch::defusedxml-0.7.1-pyhd8ed1ab_0
distributed conda-forge/linux-64::distributed-2021.5.0-py37h89c1867_0
dlpack conda-forge/linux-64::dlpack-0.5-h9c3ff4c_0
entrypoints conda-forge/noarch::entrypoints-0.3-pyhd8ed1ab_1003
expat conda-forge/linux-64::expat-2.4.1-h9c3ff4c_0
faiss-proc rapidsai/linux-64::faiss-proc-1.0.0-cuda
fastavro conda-forge/linux-64::fastavro-1.4.1-py37h5e8e339_0
fastrlock conda-forge/linux-64::fastrlock-0.6-py37hcd2ae1e_0
fiona conda-forge/linux-64::fiona-1.8.20-py37ha0cc35a_0
fontconfig conda-forge/linux-64::fontconfig-2.13.1-hba837de_1005
freetype conda-forge/linux-64::freetype-2.10.4-h0708190_1
freexl conda-forge/linux-64::freexl-1.0.6-h7f98852_0
fsspec conda-forge/noarch::fsspec-2021.6.0-pyhd8ed1ab_0
future conda-forge/linux-64::future-0.18.2-py37h89c1867_3
gcsfs conda-forge/noarch::gcsfs-2021.6.0-pyhd8ed1ab_0
gdal conda-forge/linux-64::gdal-3.2.2-py37hb0e9ad2_0
geopandas conda-forge/noarch::geopandas-0.9.0-pyhd8ed1ab_1
geopandas-base conda-forge/noarch::geopandas-base-0.9.0-pyhd8ed1ab_1
geos conda-forge/linux-64::geos-3.9.1-h9c3ff4c_2
geotiff conda-forge/linux-64::geotiff-1.6.0-hcf90da6_5
gettext conda-forge/linux-64::gettext-0.19.8.1-h0b5b191_1005
gflags conda-forge/linux-64::gflags-2.2.2-he1b5a44_1004
giflib conda-forge/linux-64::giflib-5.2.1-h36c2ea0_2
glog conda-forge/linux-64::glog-0.5.0-h48cff8f_0
google-auth conda-forge/noarch::google-auth-1.30.2-pyh6c4a22f_0
google-auth-oauth~ conda-forge/noarch::google-auth-oauthlib-0.4.4-pyhd8ed1ab_0
google-cloud-cpp conda-forge/linux-64::google-cloud-cpp-1.28.0-hbd34f9f_0
greenlet conda-forge/linux-64::greenlet-1.1.0-py37hcd2ae1e_0
grpc-cpp conda-forge/linux-64::grpc-cpp-1.38.0-h2519f57_0
hdf4 conda-forge/linux-64::hdf4-4.2.15-h10796ff_3
hdf5 conda-forge/linux-64::hdf5-1.10.6-nompi_h6a2412b_1114
heapdict conda-forge/noarch::heapdict-1.0.1-py_0
importlib-metadata conda-forge/linux-64::importlib-metadata-4.5.0-py37h89c1867_0
ipykernel conda-forge/linux-64::ipykernel-5.5.5-py37h085eea5_0
ipython conda-forge/linux-64::ipython-7.24.1-py37h085eea5_0
ipython_genutils conda-forge/noarch::ipython_genutils-0.2.0-py_1
ipywidgets conda-forge/noarch::ipywidgets-7.6.3-pyhd3deb0d_0
jedi conda-forge/linux-64::jedi-0.18.0-py37h89c1867_2
jinja2 conda-forge/noarch::jinja2-3.0.1-pyhd8ed1ab_0
joblib conda-forge/noarch::joblib-1.0.1-pyhd8ed1ab_0
jpeg conda-forge/linux-64::jpeg-9d-h36c2ea0_0
jpype1 conda-forge/linux-64::jpype1-1.3.0-py37h2527ec5_0
json-c conda-forge/linux-64::json-c-0.15-h98cffda_0
jsonschema conda-forge/noarch::jsonschema-3.2.0-pyhd8ed1ab_3
jupyter-server-pr~ conda-forge/noarch::jupyter-server-proxy-3.0.2-pyhd8ed1ab_0
jupyter_client conda-forge/noarch::jupyter_client-6.1.12-pyhd8ed1ab_0
jupyter_core conda-forge/linux-64::jupyter_core-4.7.1-py37h89c1867_0
jupyter_server conda-forge/noarch::jupyter_server-1.8.0-pyhd8ed1ab_0
jupyterlab_pygmen~ conda-forge/noarch::jupyterlab_pygments-0.1.2-pyh9f0ad1d_0
jupyterlab_widgets conda-forge/noarch::jupyterlab_widgets-1.0.0-pyhd8ed1ab_1
kealib conda-forge/linux-64::kealib-1.4.14-hcc255d8_2
kiwisolver conda-forge/linux-64::kiwisolver-1.3.1-py37h2527ec5_1
lcms2 conda-forge/linux-64::lcms2-2.12-hddcbb42_0
libblas conda-forge/linux-64::libblas-3.9.0-9_openblas
libcblas conda-forge/linux-64::libcblas-3.9.0-9_openblas
libcrc32c conda-forge/linux-64::libcrc32c-1.1.1-h9c3ff4c_2
libcudf rapidsai/linux-64::libcudf-21.06.01-cuda11.0_g101fc0fda4_2
libcudf_kafka rapidsai/linux-64::libcudf_kafka-21.06.01-g101fc0fda4_2
libcugraph rapidsai/linux-64::libcugraph-21.06.00-cuda11.0_gf9ffd2de_0
libcuml rapidsai/linux-64::libcuml-21.06.02-cuda11.0_g7dfbf8d9e_0
libcumlprims nvidia/linux-64::libcumlprims-21.06.00-cuda11.0_gfda2e6c_0
libcuspatial rapidsai/linux-64::libcuspatial-21.06.00-cuda11.0_g37798cd_0
libdap4 conda-forge/linux-64::libdap4-3.20.6-hd7c4107_2
libevent conda-forge/linux-64::libevent-2.1.10-hcdb4288_3
libfaiss conda-forge/linux-64::libfaiss-1.7.0-cuda110h8045045_8_cuda
libgcrypt conda-forge/linux-64::libgcrypt-1.9.3-h7f98852_1
libgdal conda-forge/linux-64::libgdal-3.2.2-h804b7da_0
libgfortran-ng conda-forge/linux-64::libgfortran-ng-9.3.0-hff62375_19
libgfortran5 conda-forge/linux-64::libgfortran5-9.3.0-hff62375_19
libglib conda-forge/linux-64::libglib-2.68.3-h3e27bee_0
libgpg-error conda-forge/linux-64::libgpg-error-1.42-h9c3ff4c_0
libgsasl conda-forge/linux-64::libgsasl-1.8.0-2
libhwloc conda-forge/linux-64::libhwloc-2.3.0-h5e5b7d1_1
libkml conda-forge/linux-64::libkml-1.3.0-hd79254b_1012
liblapack conda-forge/linux-64::liblapack-3.9.0-9_openblas
libllvm10 conda-forge/linux-64::libllvm10-10.0.1-he513fc3_3
libnetcdf conda-forge/linux-64::libnetcdf-4.7.4-nompi_h56d31a8_107
libntlm conda-forge/linux-64::libntlm-1.4-h7f98852_1002
libopenblas conda-forge/linux-64::libopenblas-0.3.15-pthreads_h8fe5266_1
libpng conda-forge/linux-64::libpng-1.6.37-h21135ba_2
libpq conda-forge/linux-64::libpq-13.3-hd57d9b9_0
libprotobuf conda-forge/linux-64::libprotobuf-3.16.0-h780b84a_0
librdkafka conda-forge/linux-64::librdkafka-1.5.3-hc49e61c_1
librmm rapidsai/linux-64::librmm-21.06.00-cuda11.0_gee432a0_0
librttopo conda-forge/linux-64::librttopo-1.1.0-h1185371_6
libsodium conda-forge/linux-64::libsodium-1.0.18-h36c2ea0_1
libspatialindex conda-forge/linux-64::libspatialindex-1.9.3-h9c3ff4c_3
libspatialite conda-forge/linux-64::libspatialite-5.0.1-h20cb978_4
libthrift conda-forge/linux-64::libthrift-0.14.1-he6d91bd_2
libtiff conda-forge/linux-64::libtiff-4.2.0-hbd63e13_2
libutf8proc conda-forge/linux-64::libutf8proc-2.6.1-h7f98852_0
libuuid conda-forge/linux-64::libuuid-2.32.1-h7f98852_1000
libuv conda-forge/linux-64::libuv-1.41.0-h7f98852_0
libwebp conda-forge/linux-64::libwebp-1.2.0-h3452ae3_0
libwebp-base conda-forge/linux-64::libwebp-base-1.2.0-h7f98852_2
libxcb conda-forge/linux-64::libxcb-1.13-h7f98852_1003
libxgboost rapidsai/linux-64::libxgboost-1.4.2dev.rapidsai21.06-cuda11.0_0
llvmlite conda-forge/linux-64::llvmlite-0.36.0-py37h9d7f4d0_0
locket conda-forge/noarch::locket-0.2.0-py_2
mapclassify conda-forge/noarch::mapclassify-2.4.2-pyhd8ed1ab_0
markdown conda-forge/noarch::markdown-3.3.4-pyhd8ed1ab_0
markupsafe conda-forge/linux-64::markupsafe-2.0.1-py37h5e8e339_0
matplotlib-base conda-forge/linux-64::matplotlib-base-3.4.2-py37hdd32ed1_0
matplotlib-inline conda-forge/noarch::matplotlib-inline-0.1.2-pyhd8ed1ab_2
mistune conda-forge/linux-64::mistune-0.8.4-py37h5e8e339_1003
msgpack-python conda-forge/linux-64::msgpack-python-1.0.2-py37h2527ec5_1
multidict conda-forge/linux-64::multidict-5.1.0-py37h5e8e339_1
multipledispatch conda-forge/noarch::multipledispatch-0.6.0-py_0
munch conda-forge/noarch::munch-2.5.0-py_0
nbclient conda-forge/noarch::nbclient-0.5.3-pyhd8ed1ab_0
nbconvert conda-forge/linux-64::nbconvert-6.0.7-py37h89c1867_3
nbformat conda-forge/noarch::nbformat-5.1.3-pyhd8ed1ab_0
nccl conda-forge/linux-64::nccl-2.9.9.1-h96e36e3_0
nest-asyncio conda-forge/noarch::nest-asyncio-1.5.1-pyhd8ed1ab_0
netifaces conda-forge/linux-64::netifaces-0.10.9-py37h5e8e339_1003
networkx conda-forge/noarch::networkx-2.5.1-pyhd8ed1ab_0
nlohmann_json conda-forge/linux-64::nlohmann_json-3.9.1-h9c3ff4c_1
nodejs conda-forge/linux-64::nodejs-14.15.4-h92b4a50_1
notebook conda-forge/noarch::notebook-6.4.0-pyha770c72_0
numba conda-forge/linux-64::numba-0.53.1-py37hb11d6e1_1
numpy conda-forge/linux-64::numpy-1.21.0-py37h038b26d_0
nvtx conda-forge/linux-64::nvtx-0.2.3-py37h5e8e339_0
oauthlib conda-forge/noarch::oauthlib-3.1.1-pyhd8ed1ab_0
olefile conda-forge/noarch::olefile-0.46-pyh9f0ad1d_1
openjdk conda-forge/linux-64::openjdk-8.0.282-h7f98852_0
openjpeg conda-forge/linux-64::openjpeg-2.4.0-hb52868f_1
orc conda-forge/linux-64::orc-1.6.7-h89a63ab_2
packaging conda-forge/noarch::packaging-20.9-pyh44b312d_0
pandas conda-forge/linux-64::pandas-1.2.5-py37h219a48f_0
pandoc conda-forge/linux-64::pandoc-2.14.0.2-h7f98852_0
pandocfilters conda-forge/noarch::pandocfilters-1.4.2-py_1
panel conda-forge/noarch::panel-0.10.3-pyhd8ed1ab_0
param conda-forge/noarch::param-1.10.1-pyhd3deb0d_0
parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2
parso conda-forge/noarch::parso-0.8.2-pyhd8ed1ab_0
partd conda-forge/noarch::partd-1.2.0-pyhd8ed1ab_0
pcre conda-forge/linux-64::pcre-8.45-h9c3ff4c_0
pexpect conda-forge/noarch::pexpect-4.8.0-pyh9f0ad1d_2
pickle5 conda-forge/linux-64::pickle5-0.0.11-py37h5e8e339_0
pickleshare conda-forge/noarch::pickleshare-0.7.5-py_1003
pillow conda-forge/linux-64::pillow-8.2.0-py37h4600e1f_1
pixman conda-forge/linux-64::pixman-0.40.0-h36c2ea0_0
poppler conda-forge/linux-64::poppler-21.03.0-h93df280_0
poppler-data conda-forge/noarch::poppler-data-0.4.10-0
postgresql conda-forge/linux-64::postgresql-13.3-h2510834_0
proj conda-forge/linux-64::proj-8.0.0-h277dcde_0
prometheus_client conda-forge/noarch::prometheus_client-0.11.0-pyhd8ed1ab_0
prompt-toolkit conda-forge/noarch::prompt-toolkit-3.0.19-pyha770c72_0
protobuf conda-forge/linux-64::protobuf-3.16.0-py37hcd2ae1e_0
psutil conda-forge/linux-64::psutil-5.8.0-py37h5e8e339_1
pthread-stubs conda-forge/linux-64::pthread-stubs-0.4-h36c2ea0_1001
ptyprocess conda-forge/noarch::ptyprocess-0.7.0-pyhd3deb0d_0
py-xgboost rapidsai/linux-64::py-xgboost-1.4.2dev.rapidsai21.06-cuda11.0py37_0
pyarrow conda-forge/linux-64::pyarrow-1.0.1-py37hb63ea2f_40_cuda
pyasn1 conda-forge/noarch::pyasn1-0.4.8-py_0
pyasn1-modules conda-forge/noarch::pyasn1-modules-0.2.7-py_0
pyct conda-forge/noarch::pyct-0.4.6-py_0
pyct-core conda-forge/noarch::pyct-core-0.4.6-py_0
pydeck conda-forge/noarch::pydeck-0.5.0-pyh9f0ad1d_0
pyee conda-forge/noarch::pyee-7.0.4-pyh9f0ad1d_0
pygments conda-forge/noarch::pygments-2.9.0-pyhd8ed1ab_0
pyhive conda-forge/noarch::pyhive-0.6.4-pyhd8ed1ab_0
pyjwt conda-forge/noarch::pyjwt-2.1.0-pyhd8ed1ab_0
pynvml conda-forge/noarch::pynvml-11.0.0-pyhd8ed1ab_0
pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0
pyppeteer conda-forge/noarch::pyppeteer-0.2.2-py_1
pyproj conda-forge/linux-64::pyproj-3.0.1-py37h2bb2a07_1
pyrsistent conda-forge/linux-64::pyrsistent-0.17.3-py37h5e8e339_2
python-confluent-~ conda-forge/linux-64::python-confluent-kafka-1.5.0-py37h8f50634_0
python-dateutil conda-forge/noarch::python-dateutil-2.8.1-py_0
pytz conda-forge/noarch::pytz-2021.1-pyhd8ed1ab_0
pyu2f conda-forge/noarch::pyu2f-0.1.5-pyhd8ed1ab_0
pyviz_comms conda-forge/noarch::pyviz_comms-2.0.2-pyhd8ed1ab_0
pyyaml conda-forge/linux-64::pyyaml-5.4.1-py37h5e8e339_0
pyzmq conda-forge/linux-64::pyzmq-22.1.0-py37h336d617_0
rapids rapidsai/linux-64::rapids-21.06.00-cuda11.0_py37_ge3c8282_427
rapids-blazing rapidsai/linux-64::rapids-blazing-21.06.00-cuda11.0_py37_ge3c8282_427
rapids-xgboost rapidsai/linux-64::rapids-xgboost-21.06.00-cuda11.0_py37_ge3c8282_427
re2 conda-forge/linux-64::re2-2021.04.01-h9c3ff4c_0
requests-oauthlib conda-forge/noarch::requests-oauthlib-1.3.0-pyh9f0ad1d_0
rmm rapidsai/linux-64::rmm-21.06.00-cuda_11.0_py37_gee432a0_0
rsa conda-forge/noarch::rsa-4.7.2-pyh44b312d_0
rtree conda-forge/linux-64::rtree-0.9.7-py37h0b55af0_1
s2n conda-forge/linux-64::s2n-1.0.10-h9b69904_0
sasl conda-forge/linux-64::sasl-0.3a1-py37hcd2ae1e_0
scikit-learn conda-forge/linux-64::scikit-learn-0.24.2-py37h18a542f_0
scipy conda-forge/linux-64::scipy-1.6.3-py37h29e03ee_0
send2trash conda-forge/noarch::send2trash-1.7.1-pyhd8ed1ab_0
shapely conda-forge/linux-64::shapely-1.7.1-py37h2d1e849_5
simpervisor conda-forge/noarch::simpervisor-0.4-pyhd8ed1ab_0
snappy conda-forge/linux-64::snappy-1.1.8-he1b5a44_3
sniffio conda-forge/linux-64::sniffio-1.2.0-py37h89c1867_1
sortedcontainers conda-forge/noarch::sortedcontainers-2.4.0-pyhd8ed1ab_0
spdlog conda-forge/linux-64::spdlog-1.8.5-h4bd325d_0
sqlalchemy conda-forge/linux-64::sqlalchemy-1.4.19-py37h5e8e339_0
streamz conda-forge/noarch::streamz-0.6.2-pyh44b312d_0
tblib conda-forge/noarch::tblib-1.7.0-pyhd8ed1ab_0
terminado conda-forge/linux-64::terminado-0.10.1-py37h89c1867_0
testpath conda-forge/noarch::testpath-0.5.0-pyhd8ed1ab_0
threadpoolctl conda-forge/noarch::threadpoolctl-2.1.0-pyh5ca1d4c_0
thrift conda-forge/linux-64::thrift-0.13.0-py37hcd2ae1e_2
thrift_sasl conda-forge/linux-64::thrift_sasl-0.4.2-py37h8f50634_0
tiledb conda-forge/linux-64::tiledb-2.2.9-h91fcb0e_0
toolz conda-forge/noarch::toolz-0.11.1-py_0
tornado conda-forge/linux-64::tornado-6.1-py37h5e8e339_1
traitlets conda-forge/noarch::traitlets-5.0.5-py_0
treelite conda-forge/linux-64::treelite-1.3.0-py37hfdac9b6_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.0-hd8ed1ab_0
typing_extensions conda-forge/noarch::typing_extensions-3.10.0.0-pyha770c72_0
tzcode conda-forge/linux-64::tzcode-2021a-h7f98852_1
tzdata conda-forge/noarch::tzdata-2021a-he74cb21_0
ucx rapidsai/linux-64::ucx-1.9.0+gcd9efd3-cuda11.0_0
ucx-proc rapidsai/linux-64::ucx-proc-1.0.0-gpu
ucx-py rapidsai/linux-64::ucx-py-0.20.0-py37_gcd9efd3_0
wcwidth conda-forge/noarch::wcwidth-0.2.5-pyh9f0ad1d_2
webencodings conda-forge/noarch::webencodings-0.5.1-py_1
websocket-client conda-forge/linux-64::websocket-client-0.57.0-py37h89c1867_4
websockets conda-forge/linux-64::websockets-8.1-py37h5e8e339_3
widgetsnbextension conda-forge/linux-64::widgetsnbextension-3.5.1-py37h89c1867_4
xarray conda-forge/noarch::xarray-0.18.2-pyhd8ed1ab_0
xerces-c conda-forge/linux-64::xerces-c-3.2.3-h9d8b166_2
xgboost rapidsai/linux-64::xgboost-1.4.2dev.rapidsai21.06-cuda11.0py37_0
xorg-kbproto conda-forge/linux-64::xorg-kbproto-1.0.7-h7f98852_1002
xorg-libice conda-forge/linux-64::xorg-libice-1.0.10-h7f98852_0
xorg-libsm conda-forge/linux-64::xorg-libsm-1.2.3-hd9c2040_1000
xorg-libx11 conda-forge/linux-64::xorg-libx11-1.7.2-h7f98852_0
xorg-libxau conda-forge/linux-64::xorg-libxau-1.0.9-h7f98852_0
xorg-libxdmcp conda-forge/linux-64::xorg-libxdmcp-1.1.3-h7f98852_0
xorg-libxext conda-forge/linux-64::xorg-libxext-1.3.4-h7f98852_1
xorg-libxrender conda-forge/linux-64::xorg-libxrender-0.9.10-h7f98852_1003
xorg-renderproto conda-forge/linux-64::xorg-renderproto-0.11.1-h7f98852_1002
xorg-xextproto conda-forge/linux-64::xorg-xextproto-7.3.0-h7f98852_1002
xorg-xproto conda-forge/linux-64::xorg-xproto-7.0.31-h7f98852_1007
yarl conda-forge/linux-64::yarl-1.6.3-py37h5e8e339_1
zeromq conda-forge/linux-64::zeromq-4.3.4-h9c3ff4c_0
zict conda-forge/noarch::zict-2.0.0-py_0
zipp conda-forge/noarch::zipp-3.4.1-pyhd8ed1ab_0
The following packages will be UPDATED:
ca-certificates 2020.12.5-ha878542_0 --> 2021.5.30-ha878542_0
certifi 2020.12.5-py37h89c1867_1 --> 2021.5.30-py37h89c1867_0
conda 4.9.2-py37h89c1867_0 --> 4.10.1-py37h89c1867_0
krb5 1.17.2-h926e7f8_0 --> 1.19.1-hcc1bbae_0
libcurl 7.75.0-hc4aaa36_0 --> 7.77.0-h2574ce0_0
libxml2 2.9.10-h72842e0_3 --> 2.9.12-h72842e0_0
openssl 1.1.1j-h7f98852_0 --> 1.1.1k-h7f98852_0
readline 8.0-he28a2e2_2 --> 8.1-h46c0cb4_0
Downloading and Extracting Packages
cugraph-21.06.00 | 65.0 MB | #8 | 19%
cugraph-21.06.00 | 65.0 MB | ##4 | 25%
cugraph-21.06.00 | 65.0 MB | ### | 30%
cugraph-21.06.00 | 65.0 MB | ###7 | 37%
cugraph-21.06.00 | 65.0 MB | ####2 | 42%
cugraph-21.06.00 | 65.0 MB | ####7 | 47%
cugraph-21.06.00 | 65.0 MB | #####2 | 53%
cugraph-21.06.00 | 65.0 MB | #####7 | 58%
cugraph-21.06.00 | 65.0 MB | ######3 | 63%
cugraph-21.06.00 | 65.0 MB | ######8 | 69%
cugraph-21.06.00 | 65.0 MB | #######4 | 74%
cugraph-21.06.00 | 65.0 MB | #######9 | 80%
cugraph-21.06.00 | 65.0 MB | ########5 | 85%
cugraph-21.06.00 | 65.0 MB | ######### | 91%
cugraph-21.06.00 | 65.0 MB | #########6 | 97%
cugraph-21.06.00 | 65.0 MB | ########## | 100%
cugraph-21.06.00 | 65.0 MB | ########## | 100%
threadpoolctl-2.1.0 | 15 KB | | 0%
threadpoolctl-2.1.0 | 15 KB | ########## | 100%
libfaiss-1.7.0 | 67.0 MB | | 0%
libfaiss-1.7.0 | 67.0 MB | #5 | 15%
libfaiss-1.7.0 | 67.0 MB | #### | 40%
libfaiss-1.7.0 | 67.0 MB | ######4 | 64%
libfaiss-1.7.0 | 67.0 MB | ########7 | 88%
libfaiss-1.7.0 | 67.0 MB | ########## | 100%
pyviz_comms-2.0.2 | 25 KB | | 0%
pyviz_comms-2.0.2 | 25 KB | ########## | 100%
sniffio-1.2.0 | 15 KB | | 0%
sniffio-1.2.0 | 15 KB | ########## | 100%
libspatialite-5.0.1 | 4.4 MB | | 0%
libspatialite-5.0.1 | 4.4 MB | 2 | 2%
libspatialite-5.0.1 | 4.4 MB | #######6 | 77%
libspatialite-5.0.1 | 4.4 MB | ########## | 100%
libspatialite-5.0.1 | 4.4 MB | ########## | 100%
gettext-0.19.8.1 | 3.6 MB | | 0%
gettext-0.19.8.1 | 3.6 MB | ########## | 100%
gettext-0.19.8.1 | 3.6 MB | ########## | 100%
fastavro-1.4.1 | 496 KB | | 0%
fastavro-1.4.1 | 496 KB | ########## | 100%
fastavro-1.4.1 | 496 KB | ########## | 100%
aiohttp-3.7.4.post0 | 625 KB | | 0%
aiohttp-3.7.4.post0 | 625 KB | ########## | 100%
aiohttp-3.7.4.post0 | 625 KB | ########## | 100%
zeromq-4.3.4 | 352 KB | | 0%
zeromq-4.3.4 | 352 KB | ########## | 100%
gdal-3.2.2 | 1.5 MB | | 0%
gdal-3.2.2 | 1.5 MB | ########## | 100%
gdal-3.2.2 | 1.5 MB | ########## | 100%
olefile-0.46 | 32 KB | | 0%
olefile-0.46 | 32 KB | ########## | 100%
greenlet-1.1.0 | 83 KB | | 0%
greenlet-1.1.0 | 83 KB | ########## | 100%
cycler-0.10.0 | 9 KB | | 0%
cycler-0.10.0 | 9 KB | ########## | 100%
datashape-0.5.4 | 49 KB | | 0%
datashape-0.5.4 | 49 KB | ########## | 100%
pygments-2.9.0 | 754 KB | | 0%
pygments-2.9.0 | 754 KB | ########## | 100%
pygments-2.9.0 | 754 KB | ########## | 100%
nbconvert-6.0.7 | 535 KB | | 0%
nbconvert-6.0.7 | 535 KB | ########## | 100%
nbconvert-6.0.7 | 535 KB | ########## | 100%
typing-extensions-3. | 8 KB | | 0%
typing-extensions-3. | 8 KB | ########## | 100%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | | 0%
libcuml-21.06.02 | 95.2 MB | 1 | 2%
libcuml-21.06.02 | 95.2 MB | 5 | 6%
libcuml-21.06.02 | 95.2 MB | 9 | 10%
libcuml-21.06.02 | 95.2 MB | #3 | 14%
libcuml-21.06.02 | 95.2 MB | #8 | 18%
libcuml-21.06.02 | 95.2 MB | ##2 | 22%
libcuml-21.06.02 | 95.2 MB | ##5 | 26%
libcuml-21.06.02 | 95.2 MB | ### | 30%
libcuml-21.06.02 | 95.2 MB | ###4 | 34%
libcuml-21.06.02 | 95.2 MB | ###8 | 39%
libcuml-21.06.02 | 95.2 MB | ####2 | 42%
libcuml-21.06.02 | 95.2 MB | ####7 | 47%
libcuml-21.06.02 | 95.2 MB | #####1 | 51%
libcuml-21.06.02 | 95.2 MB | #####6 | 56%
libcuml-21.06.02 | 95.2 MB | ###### | 60%
libcuml-21.06.02 | 95.2 MB | ######4 | 65%
libcuml-21.06.02 | 95.2 MB | ######9 | 69%
libcuml-21.06.02 | 95.2 MB | #######3 | 74%
libcuml-21.06.02 | 95.2 MB | #######8 | 78%
libcuml-21.06.02 | 95.2 MB | ########3 | 83%
libcuml-21.06.02 | 95.2 MB | ########7 | 87%
libcuml-21.06.02 | 95.2 MB | #########2 | 92%
libcuml-21.06.02 | 95.2 MB | #########6 | 97%
libcuml-21.06.02 | 95.2 MB | ########## | 100%
toolz-0.11.1 | 46 KB | | 0%
toolz-0.11.1 | 46 KB | ########## | 100%
python-confluent-kaf | 122 KB | | 0%
python-confluent-kaf | 122 KB | ########## | 100%
sqlalchemy-1.4.19 | 2.3 MB | | 0%
sqlalchemy-1.4.19 | 2.3 MB | ########## | 100%
sqlalchemy-1.4.19 | 2.3 MB | ########## | 100%
libgcrypt-1.9.3 | 677 KB | | 0%
libgcrypt-1.9.3 | 677 KB | ########## | 100%
libgcrypt-1.9.3 | 677 KB | ########## | 100%
thrift_sasl-0.4.2 | 14 KB | | 0%
thrift_sasl-0.4.2 | 14 KB | ########## | 100%
json-c-0.15 | 274 KB | | 0%
json-c-0.15 | 274 KB | ########## | 100%
sasl-0.3a1 | 74 KB | | 0%
sasl-0.3a1 | 74 KB | ########## | 100%
nbclient-0.5.3 | 67 KB | | 0%
nbclient-0.5.3 | 67 KB | ########## | 100%
datashader-0.11.1 | 14.0 MB | | 0%
datashader-0.11.1 | 14.0 MB | 5 | 6%
datashader-0.11.1 | 14.0 MB | ########1 | 81%
datashader-0.11.1 | 14.0 MB | ########## | 100%
ipywidgets-7.6.3 | 101 KB | | 0%
ipywidgets-7.6.3 | 101 KB | ########## | 100%
brotli-1.0.9 | 389 KB | | 0%
brotli-1.0.9 | 389 KB | ########## | 100%
brotli-1.0.9 | 389 KB | ########## | 100%
netifaces-0.10.9 | 17 KB | | 0%
netifaces-0.10.9 | 17 KB | ########## | 100%
pyct-core-0.4.6 | 13 KB | | 0%
pyct-core-0.4.6 | 13 KB | ########## | 100%
hdf4-4.2.15 | 950 KB | | 0%
hdf4-4.2.15 | 950 KB | ########## | 100%
hdf4-4.2.15 | 950 KB | ########## | 100%
cytoolz-0.11.0 | 403 KB | | 0%
cytoolz-0.11.0 | 403 KB | ########## | 100%
cairo-1.16.0 | 1.5 MB | | 0%
cairo-1.16.0 | 1.5 MB | ########## | 100%
cairo-1.16.0 | 1.5 MB | ########## | 100%
cachetools-4.2.2 | 12 KB | | 0%
cachetools-4.2.2 | 12 KB | ########## | 100%
nbformat-5.1.3 | 47 KB | | 0%
nbformat-5.1.3 | 47 KB | ########## | 100%
joblib-1.0.1 | 206 KB | | 0%
joblib-1.0.1 | 206 KB | ########## | 100%
libcuspatial-21.06.0 | 7.6 MB | | 0%
libcuspatial-21.06.0 | 7.6 MB | | 0%
libcuspatial-21.06.0 | 7.6 MB | 1 | 1%
libcuspatial-21.06.0 | 7.6 MB | 3 | 4%
libcuspatial-21.06.0 | 7.6 MB | #5 | 15%
libcuspatial-21.06.0 | 7.6 MB | #####8 | 59%
libcuspatial-21.06.0 | 7.6 MB | ########## | 100%
libcuspatial-21.06.0 | 7.6 MB | ########## | 100%
krb5-1.19.1 | 1.4 MB | | 0%
krb5-1.19.1 | 1.4 MB | ########## | 100%
krb5-1.19.1 | 1.4 MB | ########## | 100%
pyasn1-modules-0.2.7 | 60 KB | | 0%
pyasn1-modules-0.2.7 | 60 KB | ########## | 100%
tornado-6.1 | 646 KB | | 0%
tornado-6.1 | 646 KB | ########## | 100%
tornado-6.1 | 646 KB | ########## | 100%
click-7.1.2 | 64 KB | | 0%
click-7.1.2 | 64 KB | ########## | 100%
scikit-learn-0.24.2 | 7.5 MB | | 0%
scikit-learn-0.24.2 | 7.5 MB | #######7 | 77%
scikit-learn-0.24.2 | 7.5 MB | ########## | 100%
custreamz-21.06.01 | 32 KB | | 0%
custreamz-21.06.01 | 32 KB | ##### | 50%
custreamz-21.06.01 | 32 KB | ########## | 100%
jsonschema-3.2.0 | 45 KB | | 0%
jsonschema-3.2.0 | 45 KB | ########## | 100%
xerces-c-3.2.3 | 1.8 MB | | 0%
xerces-c-3.2.3 | 1.8 MB | ########## | 100%
xerces-c-3.2.3 | 1.8 MB | ########## | 100%
munch-2.5.0 | 12 KB | | 0%
munch-2.5.0 | 12 KB | ########## | 100%
openssl-1.1.1k | 2.1 MB | | 0%
openssl-1.1.1k | 2.1 MB | ########## | 100%
openssl-1.1.1k | 2.1 MB | ########## | 100%
webencodings-0.5.1 | 12 KB | | 0%
webencodings-0.5.1 | 12 KB | ########## | 100%
ipykernel-5.5.5 | 167 KB | | 0%
ipykernel-5.5.5 | 167 KB | ########## | 100%
networkx-2.5.1 | 1.2 MB | | 0%
networkx-2.5.1 | 1.2 MB | ########## | 100%
networkx-2.5.1 | 1.2 MB | ########## | 100%
ipython_genutils-0.2 | 21 KB | | 0%
ipython_genutils-0.2 | 21 KB | ########## | 100%
nest-asyncio-1.5.1 | 9 KB | | 0%
nest-asyncio-1.5.1 | 9 KB | ########## | 100%
cligj-0.7.2 | 10 KB | | 0%
cligj-0.7.2 | 10 KB | ########## | 100%
orc-1.6.7 | 751 KB | | 0%
orc-1.6.7 | 751 KB | ########## | 100%
orc-1.6.7 | 751 KB | ########## | 100%
cfitsio-3.470 | 1.3 MB | | 0%
cfitsio-3.470 | 1.3 MB | ########## | 100%
cfitsio-3.470 | 1.3 MB | ########## | 100%
fontconfig-2.13.1 | 357 KB | | 0%
fontconfig-2.13.1 | 357 KB | ########## | 100%
dask-2021.5.0 | 4 KB | | 0%
dask-2021.5.0 | 4 KB | ########## | 100%
prometheus_client-0. | 46 KB | | 0%
prometheus_client-0. | 46 KB | ########## | 100%
snappy-1.1.8 | 32 KB | | 0%
snappy-1.1.8 | 32 KB | ########## | 100%
entrypoints-0.3 | 8 KB | | 0%
entrypoints-0.3 | 8 KB | ########## | 100%
postgresql-13.3 | 5.3 MB | | 0%
postgresql-13.3 | 5.3 MB | ########## | 100%
postgresql-13.3 | 5.3 MB | ########## | 100%
xorg-kbproto-1.0.7 | 27 KB | | 0%
xorg-kbproto-1.0.7 | 27 KB | ########## | 100%
parso-0.8.2 | 68 KB | | 0%
parso-0.8.2 | 68 KB | ########## | 100%
nodejs-14.15.4 | 15.7 MB | | 0%
nodejs-14.15.4 | 15.7 MB | ######8 | 69%
nodejs-14.15.4 | 15.7 MB | ########## | 100%
nodejs-14.15.4 | 15.7 MB | ########## | 100%
xgboost-1.4.2dev.rap | 17 KB | | 0%
xgboost-1.4.2dev.rap | 17 KB | #########5 | 96%
xgboost-1.4.2dev.rap | 17 KB | ########## | 100%
param-1.10.1 | 64 KB | | 0%
param-1.10.1 | 64 KB | ########## | 100%
glog-0.5.0 | 104 KB | | 0%
glog-0.5.0 | 104 KB | ########## | 100%
fsspec-2021.6.0 | 79 KB | | 0%
fsspec-2021.6.0 | 79 KB | ########## | 100%
libxcb-1.13 | 395 KB | | 0%
libxcb-1.13 | 395 KB | ########## | 100%
libxcb-1.13 | 395 KB | ########## | 100%
widgetsnbextension-3 | 1.8 MB | | 0%
widgetsnbextension-3 | 1.8 MB | ########## | 100%
widgetsnbextension-3 | 1.8 MB | ########## | 100%
scipy-1.6.3 | 20.5 MB | | 0%
scipy-1.6.3 | 20.5 MB | ##### | 50%
scipy-1.6.3 | 20.5 MB | ########## | 100%
scipy-1.6.3 | 20.5 MB | ########## | 100%
libevent-2.1.10 | 1.1 MB | | 0%
libevent-2.1.10 | 1.1 MB | ########## | 100%
libevent-2.1.10 | 1.1 MB | ########## | 100%
numba-0.53.1 | 3.7 MB | | 0%
numba-0.53.1 | 3.7 MB | #####3 | 53%
numba-0.53.1 | 3.7 MB | ########## | 100%
numba-0.53.1 | 3.7 MB | ########## | 100%
jupyter-server-proxy | 27 KB | | 0%
jupyter-server-proxy | 27 KB | ########## | 100%
spdlog-1.8.5 | 353 KB | | 0%
spdlog-1.8.5 | 353 KB | ########## | 100%
markdown-3.3.4 | 67 KB | | 0%
markdown-3.3.4 | 67 KB | ########## | 100%
argon2-cffi-20.1.0 | 47 KB | | 0%
argon2-cffi-20.1.0 | 47 KB | ########## | 100%
zict-2.0.0 | 10 KB | | 0%
zict-2.0.0 | 10 KB | ########## | 100%
llvmlite-0.36.0 | 2.7 MB | | 0%
llvmlite-0.36.0 | 2.7 MB | ########## | 100%
llvmlite-0.36.0 | 2.7 MB | ########## | 100%
cudatoolkit-11.0.221 | 953.0 MB | | 0%
cudatoolkit-11.0.221 | 953.0 MB | ########## | 100%
cudatoolkit-11.0.221 | 953.0 MB | ########## | 100%
libntlm-1.4 | 32 KB | | 0%
libntlm-1.4 | 32 KB | ########## | 100%
libcugraph-21.06.00 | 213.6 MB | | 0%
libcugraph-21.06.00 | 213.6 MB | | 0%
libcugraph-21.06.00 | 213.6 MB | | 0%
libcugraph-21.06.00 | 213.6 MB | | 0%
libcugraph-21.06.00 | 213.6 MB | | 1%
libcugraph-21.06.00 | 213.6 MB | 2 | 3%
libcugraph-21.06.00 | 213.6 MB | 5 | 5%
libcugraph-21.06.00 | 213.6 MB | 7 | 8%
libcugraph-21.06.00 | 213.6 MB | 9 | 10%
libcugraph-21.06.00 | 213.6 MB | #2 | 12%
libcugraph-21.06.00 | 213.6 MB | #4 | 15%
libcugraph-21.06.00 | 213.6 MB | #6 | 17%
libcugraph-21.06.00 | 213.6 MB | #8 | 19%
libcugraph-21.06.00 | 213.6 MB | ##1 | 21%
libcugraph-21.06.00 | 213.6 MB | ##2 | 23%
libcugraph-21.06.00 | 213.6 MB | ##5 | 25%
libcugraph-21.06.00 | 213.6 MB | ##6 | 27%
libcugraph-21.06.00 | 213.6 MB | ##9 | 29%
libcugraph-21.06.00 | 213.6 MB | ### | 31%
libcugraph-21.06.00 | 213.6 MB | ###2 | 33%
libcugraph-21.06.00 | 213.6 MB | ###4 | 35%
libcugraph-21.06.00 | 213.6 MB | ###7 | 37%
libcugraph-21.06.00 | 213.6 MB | ###8 | 39%
libcugraph-21.06.00 | 213.6 MB | ####1 | 41%
libcugraph-21.06.00 | 213.6 MB | ####3 | 43%
libcugraph-21.06.00 | 213.6 MB | ####5 | 45%
libcugraph-21.06.00 | 213.6 MB | ####6 | 47%
libcugraph-21.06.00 | 213.6 MB | ####8 | 49%
libcugraph-21.06.00 | 213.6 MB | ##### | 50%
libcugraph-21.06.00 | 213.6 MB | #####2 | 52%
libcugraph-21.06.00 | 213.6 MB | #####4 | 54%
libcugraph-21.06.00 | 213.6 MB | #####6 | 56%
libcugraph-21.06.00 | 213.6 MB | #####7 | 58%
libcugraph-21.06.00 | 213.6 MB | #####9 | 60%
libcugraph-21.06.00 | 213.6 MB | ######1 | 62%
libcugraph-21.06.00 | 213.6 MB | ######3 | 64%
libcugraph-21.06.00 | 213.6 MB | ######5 | 65%
libcugraph-21.06.00 | 213.6 MB | ######7 | 67%
libcugraph-21.06.00 | 213.6 MB | ######9 | 69%
libcugraph-21.06.00 | 213.6 MB | #######1 | 71%
libcugraph-21.06.00 | 213.6 MB | #######2 | 73%
libcugraph-21.06.00 | 213.6 MB | #######4 | 75%
libcugraph-21.06.00 | 213.6 MB | #######6 | 77%
libcugraph-21.06.00 | 213.6 MB | #######8 | 78%
libcugraph-21.06.00 | 213.6 MB | ######## | 80%
libcugraph-21.06.00 | 213.6 MB | ########2 | 82%
libcugraph-21.06.00 | 213.6 MB | ########3 | 84%
libcugraph-21.06.00 | 213.6 MB | ########5 | 86%
libcugraph-21.06.00 | 213.6 MB | ########7 | 88%
libcugraph-21.06.00 | 213.6 MB | ########9 | 89%
libcugraph-21.06.00 | 213.6 MB | #########1 | 91%
libcugraph-21.06.00 | 213.6 MB | #########3 | 93%
libcugraph-21.06.00 | 213.6 MB | #########5 | 95%
libcugraph-21.06.00 | 213.6 MB | #########6 | 97%
libcugraph-21.06.00 | 213.6 MB | #########8 | 99%
libcugraph-21.06.00 | 213.6 MB | ########## | 100%
pynvml-11.0.0 | 39 KB | | 0%
pynvml-11.0.0 | 39 KB | ########## | 100%
google-auth-1.30.2 | 77 KB | | 0%
google-auth-1.30.2 | 77 KB | ########## | 100%
ucx-proc-1.0.0 | 9 KB | | 0%
ucx-proc-1.0.0 | 9 KB | ########## | 100%
ucx-proc-1.0.0 | 9 KB | ########## | 100%
libwebp-base-1.2.0 | 815 KB | | 0%
libwebp-base-1.2.0 | 815 KB | ########## | 100%
libwebp-base-1.2.0 | 815 KB | ########## | 100%
click-plugins-1.1.1 | 9 KB | | 0%
click-plugins-1.1.1 | 9 KB | ########## | 100%
jupyter_core-4.7.1 | 72 KB | | 0%
jupyter_core-4.7.1 | 72 KB | ########## | 100%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | | 0%
libxgboost-1.4.2dev. | 115.3 MB | 1 | 1%
libxgboost-1.4.2dev. | 115.3 MB | 3 | 4%
libxgboost-1.4.2dev. | 115.3 MB | 7 | 8%
libxgboost-1.4.2dev. | 115.3 MB | #1 | 11%
libxgboost-1.4.2dev. | 115.3 MB | #4 | 14%
libxgboost-1.4.2dev. | 115.3 MB | #7 | 18%
libxgboost-1.4.2dev. | 115.3 MB | ## | 21%
libxgboost-1.4.2dev. | 115.3 MB | ##4 | 25%
libxgboost-1.4.2dev. | 115.3 MB | ##8 | 28%
libxgboost-1.4.2dev. | 115.3 MB | ###2 | 32%
libxgboost-1.4.2dev. | 115.3 MB | ###5 | 36%
libxgboost-1.4.2dev. | 115.3 MB | ###9 | 39%
libxgboost-1.4.2dev. | 115.3 MB | ####2 | 43%
libxgboost-1.4.2dev. | 115.3 MB | ####6 | 47%
libxgboost-1.4.2dev. | 115.3 MB | ##### | 50%
libxgboost-1.4.2dev. | 115.3 MB | #####4 | 54%
libxgboost-1.4.2dev. | 115.3 MB | #####7 | 58%
libxgboost-1.4.2dev. | 115.3 MB | ######1 | 61%
libxgboost-1.4.2dev. | 115.3 MB | ######4 | 65%
libxgboost-1.4.2dev. | 115.3 MB | ######8 | 69%
libxgboost-1.4.2dev. | 115.3 MB | #######2 | 72%
libxgboost-1.4.2dev. | 115.3 MB | #######5 | 76%
libxgboost-1.4.2dev. | 115.3 MB | #######8 | 79%
libxgboost-1.4.2dev. | 115.3 MB | ########2 | 82%
libxgboost-1.4.2dev. | 115.3 MB | ########6 | 86%
libxgboost-1.4.2dev. | 115.3 MB | ########9 | 90%
libxgboost-1.4.2dev. | 115.3 MB | #########2 | 93%
libxgboost-1.4.2dev. | 115.3 MB | #########6 | 97%
libxgboost-1.4.2dev. | 115.3 MB | ########## | 100%
libxgboost-1.4.2dev. | 115.3 MB | ########## | 100%
protobuf-3.16.0 | 342 KB | | 0%
protobuf-3.16.0 | 342 KB | ########## | 100%
jpype1-1.3.0 | 482 KB | | 0%
jpype1-1.3.0 | 482 KB | ########## | 100%
grpc-cpp-1.38.0 | 3.6 MB | | 0%
grpc-cpp-1.38.0 | 3.6 MB | ########## | 100%
grpc-cpp-1.38.0 | 3.6 MB | ########## | 100%
abseil-cpp-20210324. | 1015 KB | | 0%
abseil-cpp-20210324. | 1015 KB | ########## | 100%
abseil-cpp-20210324. | 1015 KB | ########## | 100%
re2-2021.04.01 | 218 KB | | 0%
re2-2021.04.01 | 218 KB | ########## | 100%
libwebp-1.2.0 | 85 KB | | 0%
libwebp-1.2.0 | 85 KB | ########## | 100%
rtree-0.9.7 | 45 KB | | 0%
rtree-0.9.7 | 45 KB | ########## | 100%
libglib-2.68.3 | 3.1 MB | | 0%
libglib-2.68.3 | 3.1 MB | ########## | 100%
libglib-2.68.3 | 3.1 MB | ########## | 100%
streamz-0.6.2 | 59 KB | | 0%
streamz-0.6.2 | 59 KB | ########## | 100%
shapely-1.7.1 | 438 KB | | 0%
shapely-1.7.1 | 438 KB | ########## | 100%
shapely-1.7.1 | 438 KB | ########## | 100%
fastrlock-0.6 | 31 KB | | 0%
fastrlock-0.6 | 31 KB | ########## | 100%
aws-checksums-0.1.11 | 50 KB | | 0%
aws-checksums-0.1.11 | 50 KB | ########## | 100%
jedi-0.18.0 | 923 KB | | 0%
jedi-0.18.0 | 923 KB | ########## | 100%
jedi-0.18.0 | 923 KB | ########## | 100%
jpeg-9d | 264 KB | | 0%
jpeg-9d | 264 KB | ########## | 100%
decorator-4.4.2 | 11 KB | | 0%
decorator-4.4.2 | 11 KB | ########## | 100%
markupsafe-2.0.1 | 22 KB | | 0%
markupsafe-2.0.1 | 22 KB | ########## | 100%
google-auth-oauthlib | 19 KB | | 0%
google-auth-oauthlib | 19 KB | ########## | 100%
cyrus-sasl-2.1.27 | 224 KB | | 0%
cyrus-sasl-2.1.27 | 224 KB | ########## | 100%
xorg-libice-1.0.10 | 58 KB | | 0%
xorg-libice-1.0.10 | 58 KB | ########## | 100%
pydeck-0.5.0 | 3.6 MB | | 0%
pydeck-0.5.0 | 3.6 MB | ########## | 100%
pydeck-0.5.0 | 3.6 MB | ########## | 100%
librttopo-1.1.0 | 235 KB | | 0%
librttopo-1.1.0 | 235 KB | ########## | 100%
bleach-3.3.0 | 111 KB | | 0%
bleach-3.3.0 | 111 KB | ########## | 100%
requests-oauthlib-1. | 21 KB | | 0%
requests-oauthlib-1. | 21 KB | ########## | 100%
rapids-blazing-21.06 | 5 KB | | 0%
rapids-blazing-21.06 | 5 KB | ########## | 100%
rapids-blazing-21.06 | 5 KB | ########## | 100%
sortedcontainers-2.4 | 26 KB | | 0%
sortedcontainers-2.4 | 26 KB | ########## | 100%
libpng-1.6.37 | 306 KB | | 0%
libpng-1.6.37 | 306 KB | ########## | 100%
python-dateutil-2.8. | 220 KB | | 0%
python-dateutil-2.8. | 220 KB | ########## | 100%
psutil-5.8.0 | 342 KB | | 0%
psutil-5.8.0 | 342 KB | ########## | 100%
distributed-2021.5.0 | 1.1 MB | | 0%
distributed-2021.5.0 | 1.1 MB | ########## | 100%
distributed-2021.5.0 | 1.1 MB | ########## | 100%
cuml-21.06.02 | 78.9 MB | | 0%
cuml-21.06.02 | 78.9 MB | | 0%
cuml-21.06.02 | 78.9 MB | | 0%
cuml-21.06.02 | 78.9 MB | | 0%
cuml-21.06.02 | 78.9 MB | 1 | 2%
cuml-21.06.02 | 78.9 MB | 6 | 6%
cuml-21.06.02 | 78.9 MB | # | 10%
cuml-21.06.02 | 78.9 MB | #6 | 16%
cuml-21.06.02 | 78.9 MB | ## | 20%
cuml-21.06.02 | 78.9 MB | ##6 | 26%
cuml-21.06.02 | 78.9 MB | ### | 30%
cuml-21.06.02 | 78.9 MB | ###7 | 37%
cuml-21.06.02 | 78.9 MB | ####1 | 42%
cuml-21.06.02 | 78.9 MB | ####8 | 48%
cuml-21.06.02 | 78.9 MB | #####2 | 53%
cuml-21.06.02 | 78.9 MB | #####9 | 59%
cuml-21.06.02 | 78.9 MB | ######4 | 64%
cuml-21.06.02 | 78.9 MB | ####### | 71%
cuml-21.06.02 | 78.9 MB | #######5 | 76%
cuml-21.06.02 | 78.9 MB | ########2 | 82%
cuml-21.06.02 | 78.9 MB | ########6 | 87%
cuml-21.06.02 | 78.9 MB | #########1 | 91%
cuml-21.06.02 | 78.9 MB | #########6 | 97%
cuml-21.06.02 | 78.9 MB | ########## | 100%
backports-1.0 | 4 KB | | 0%
backports-1.0 | 4 KB | ########## | 100%
rapids-xgboost-21.06 | 4 KB | | 0%
rapids-xgboost-21.06 | 4 KB | ########## | 100%
rapids-xgboost-21.06 | 4 KB | ########## | 100%
xorg-libxext-1.3.4 | 54 KB | | 0%
xorg-libxext-1.3.4 | 54 KB | ########## | 100%
arrow-cpp-1.0.1 | 21.1 MB | | 0%
arrow-cpp-1.0.1 | 21.1 MB | ####1 | 42%
arrow-cpp-1.0.1 | 21.1 MB | ########## | 100%
arrow-cpp-1.0.1 | 21.1 MB | ########## | 100%
cudf_kafka-21.06.01 | 1.7 MB | | 0%
cudf_kafka-21.06.01 | 1.7 MB | | 1%
cudf_kafka-21.06.01 | 1.7 MB | 5 | 5%
cudf_kafka-21.06.01 | 1.7 MB | #8 | 19%
cudf_kafka-21.06.01 | 1.7 MB | #######4 | 75%
cudf_kafka-21.06.01 | 1.7 MB | ########## | 100%
cudf_kafka-21.06.01 | 1.7 MB | ########## | 100%
notebook-6.4.0 | 6.1 MB | | 0%
notebook-6.4.0 | 6.1 MB | ########1 | 82%
notebook-6.4.0 | 6.1 MB | ########## | 100%
pyjwt-2.1.0 | 17 KB | | 0%
pyjwt-2.1.0 | 17 KB | ########## | 100%
geos-3.9.1 | 1.1 MB | | 0%
geos-3.9.1 | 1.1 MB | ########## | 100%
geos-3.9.1 | 1.1 MB | ########## | 100%
tzcode-2021a | 68 KB | | 0%
tzcode-2021a | 68 KB | ########## | 100%
rmm-21.06.00 | 7.0 MB | | 0%
rmm-21.06.00 | 7.0 MB | | 0%
rmm-21.06.00 | 7.0 MB | #####3 | 53%
rmm-21.06.00 | 7.0 MB | ########## | 100%
rmm-21.06.00 | 7.0 MB | ########## | 100%
pyproj-3.0.1 | 484 KB | | 0%
pyproj-3.0.1 | 484 KB | ########## | 100%
pyproj-3.0.1 | 484 KB | ########## | 100%
anyio-3.2.0 | 138 KB | | 0%
anyio-3.2.0 | 138 KB | ########## | 100%
nccl-2.9.9.1 | 82.3 MB | | 0%
nccl-2.9.9.1 | 82.3 MB | 8 | 8%
nccl-2.9.9.1 | 82.3 MB | #6 | 16%
nccl-2.9.9.1 | 82.3 MB | ###2 | 32%
nccl-2.9.9.1 | 82.3 MB | ##### | 51%
nccl-2.9.9.1 | 82.3 MB | ######7 | 68%
nccl-2.9.9.1 | 82.3 MB | ########4 | 85%
nccl-2.9.9.1 | 82.3 MB | ########## | 100%
nccl-2.9.9.1 | 82.3 MB | ########## | 100%
colorcet-2.0.6 | 1.5 MB | | 0%
colorcet-2.0.6 | 1.5 MB | ########## | 100%
colorcet-2.0.6 | 1.5 MB | ########## | 100%
pillow-8.2.0 | 684 KB | | 0%
pillow-8.2.0 | 684 KB | ########## | 100%
pillow-8.2.0 | 684 KB | ########## | 100%
ca-certificates-2021 | 136 KB | | 0%
ca-certificates-2021 | 136 KB | ########## | 100%
mapclassify-2.4.2 | 36 KB | | 0%
mapclassify-2.4.2 | 36 KB | ########## | 100%
libthrift-0.14.1 | 4.5 MB | | 0%
libthrift-0.14.1 | 4.5 MB | ########## | 100%
libthrift-0.14.1 | 4.5 MB | ########## | 100%
pandas-1.2.5 | 11.8 MB | | 0%
pandas-1.2.5 | 11.8 MB | ######9 | 70%
pandas-1.2.5 | 11.8 MB | ########## | 100%
pandas-1.2.5 | 11.8 MB | ########## | 100%
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... By downloading and using the CUDA Toolkit conda packages, you accept the terms and conditions of the CUDA End User License Agreement (EULA): https://docs.nvidia.com/cuda/eula/index.html
Enabling notebook extension jupyter-js-widgets/extension...
Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.d/plotlywidget.json
/usr/local/etc/jupyter/nbconfig/notebook.d/pydeck.json
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/usr/local/etc/jupyter/nbconfig/notebook.json
Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.d/plotlywidget.json
/usr/local/etc/jupyter/nbconfig/notebook.d/pydeck.json
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
      - Validating: OK
Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.d/plotlywidget.json
/usr/local/etc/jupyter/nbconfig/notebook.d/pydeck.json
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/usr/local/etc/jupyter/nbconfig/notebook.json
done
RAPIDS conda installation complete. Updating Colab's libraries...
Copying /usr/local/lib/libcudf.so to /usr/lib/libcudf.so
Copying /usr/local/lib/libnccl.so to /usr/lib/libnccl.so
Copying /usr/local/lib/libcuml.so to /usr/lib/libcuml.so
Copying /usr/local/lib/libcugraph.so to /usr/lib/libcugraph.so
Copying /usr/local/lib/libxgboost.so to /usr/lib/libxgboost.so
Copying /usr/local/lib/libcuspatial.so to /usr/lib/libcuspatial.so
Copying /usr/local/lib/libgeos.so to /usr/lib/libgeos.so
###Markdown
Installing the Required Libraries
###Code
%matplotlib inline
%load_ext google.colab.data_table
import matplotlib.pyplot as plt
import numpy as np
import gc
import pandas as pd
import pickle
import dask
import dask_cudf
import cudf
from datetime import datetime
from dask import dataframe as dd
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from google.colab import files
from oauth2client.client import GoogleCredentials
pd.set_option('display.max_columns', None)
pd.options.display.precision = 2
pd.options.display.max_rows = 50
import seaborn as sns
import missingno as msno
import matplotlib as mpl
from matplotlib import rcParams
from numba import jit, njit
mpl.rc('figure', max_open_warning = 0)
from sklearn import preprocessing
###Output
_____no_output_____
###Markdown
Creating a Dask Client
###Code
from dask.distributed import Client, wait
# Start a local Dask cluster/client with the default number of workers and threads
client = Client()
#client = Client(n_workers=2, threads_per_worker=4)
client.cluster
###Output
/usr/local/lib/python3.7/site-packages/distributed/client.py:1148: VersionMismatchWarning: Mismatched versions found
+---------+--------+-----------+---------+
| Package | client | scheduler | workers |
+---------+--------+-----------+---------+
| numpy | 1.19.5 | 1.19.5 | 1.21.0 |
| tornado | 5.1.1 | 5.1.1 | 6.1 |
+---------+--------+-----------+---------+
warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
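###Markdown
The warning above shows that the workers run different numpy/tornado versions than the client and scheduler. A minimal sketch of how the mismatch could be inspected with `client.get_versions`, assuming the `client` created above (passing `check=True` would raise on any mismatch instead of only warning):
###Code
# Hypothetical check: list the package versions seen by client, scheduler and workers.
versions = client.get_versions(check=False)
versions['workers']
###Output
_____no_output_____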
###Markdown
Authenticating with Google, importing the files from Google Drive, and creating Dask dataframes with RAM cleanup (garbage collection).
###Code
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Google Drive file ids, dataframe names and local pickle file names
ids = ['1Oyd1VdQo3fHJD5LGXNgi5kZBS812MiKN','183SF0fxXbTVXYfko-BOyuwAB2BmZQAmK']
estados = ['BR','BRPRO']
arquivo = ['brasil.pkl','brasilprocessed.pkl']
dflist = []
for i in range(len(ids)):
    # Download each pickle from Drive, load it and wrap it as a Dask dataframe
    fileDownloaded = drive.CreateFile({'id': ids[i]})
    fileDownloaded.GetContentFile(arquivo[i])
    globals()[estados[i]] = dd.from_pandas(pd.read_pickle(arquivo[i]), npartitions=245)
    n = gc.collect()
    globals()[estados[i]] = (globals()[estados[i]]).reset_index(drop=True)
    n = gc.collect()
    dflist.append(eval(estados[i]))
    n = gc.collect()
dflist[0].head()
###Output
_____no_output_____
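###Markdown
The loading loop above stores each dataframe both in a module-level name (via `globals()`/`eval`) and in `dflist`. A minimal alternative sketch, assuming the same `ids`, `estados` and `arquivo` lists, keeps the dataframes in a dict keyed by name, which avoids `eval` entirely:
###Code
# Hypothetical refactor of the loading loop above: a dict instead of globals().
dfs = {}
for file_id, nome, caminho in zip(ids, estados, arquivo):
    f = drive.CreateFile({'id': file_id})
    f.GetContentFile(caminho)
    dfs[nome] = dd.from_pandas(pd.read_pickle(caminho), npartitions=245).reset_index(drop=True)
    gc.collect()
dflist = [dfs[nome] for nome in estados]
###Output
_____no_output_____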
###Markdown
Running Machine Learning with Parallel Computing. Installing Dask-ML.
###Code
!pip install dask-ml
###Output
Collecting dask-ml
  Downloading dask_ml-1.9.0-py3-none-any.whl (143 kB)
     |████████████████████████████████| 143 kB 14.1 MB/s
Requirement already satisfied: packaging in /usr/local/lib/python3.7/site-packages (from dask-ml) (20.9)
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.7/site-packages (from dask-ml) (1.21.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/site-packages (from dask-ml) (1.6.3)
Requirement already satisfied: distributed>=2.4.0 in /usr/local/lib/python3.7/site-packages (from dask-ml) (2021.5.0)
Requirement already satisfied: pandas>=0.24.2 in /usr/local/lib/python3.7/site-packages (from dask-ml) (1.2.5)
Collecting dask-glm>=0.2.0
Downloading dask_glm-0.2.0-py2.py3-none-any.whl (12 kB)
Requirement already satisfied: multipledispatch>=0.4.9 in /usr/local/lib/python3.7/site-packages (from dask-ml) (0.6.0)
Requirement already satisfied: scikit-learn>=0.23 in /usr/local/lib/python3.7/site-packages (from dask-ml) (0.24.2)
Requirement already satisfied: dask[array,dataframe]>=2.4.0 in /usr/local/lib/python3.7/site-packages (from dask-ml) (2021.5.0)
Requirement already satisfied: numba>=0.51.0 in /usr/local/lib/python3.7/site-packages (from dask-ml) (0.53.1)
Requirement already satisfied: cloudpickle>=0.2.2 in /usr/local/lib/python3.7/site-packages (from dask-glm>=0.2.0->dask-ml) (1.6.0)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (2021.6.0)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (0.11.1)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/site-packages (from dask[array,dataframe]>=2.4.0->dask-ml) (5.4.1)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (1.0.2)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (5.8.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (1.7.0)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (2.0.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (2.4.0)
Requirement already satisfied: tornado>=5 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (6.1)
Requirement already satisfied: click>=6.6 in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (7.1.2)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/site-packages (from distributed>=2.4.0->dask-ml) (49.6.0.post20210108)
Requirement already satisfied: six in /usr/local/lib/python3.7/site-packages (from multipledispatch>=0.4.9->dask-ml) (1.15.0)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in /usr/local/lib/python3.7/site-packages (from numba>=0.51.0->dask-ml) (0.36.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/site-packages (from pandas>=0.24.2->dask-ml) (2.8.1)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas>=0.24.2->dask-ml) (2021.1)
Requirement already satisfied: locket in /usr/local/lib/python3.7/site-packages (from partd>=0.3.10->dask[array,dataframe]>=2.4.0->dask-ml) (0.2.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/site-packages (from scikit-learn>=0.23->dask-ml) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/site-packages (from scikit-learn>=0.23->dask-ml) (1.0.1)
Requirement already satisfied: heapdict in /usr/local/lib/python3.7/site-packages (from zict>=0.1.3->distributed>=2.4.0->dask-ml) (1.0.1)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/site-packages (from packaging->dask-ml) (2.4.7)
Installing collected packages: dask-glm, dask-ml
Successfully installed dask-glm-0.2.0 dask-ml-1.9.0
###Markdown
Importing the Required Scikit-learn Libraries
###Code
import sklearn
from sklearn.metrics import mean_squared_error
from sklearn.metrics import classification_report
from sklearn import preprocessing, metrics
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
import joblib
from dask_ml.model_selection import train_test_split
import warnings
###Output
_____no_output_____
###Markdown
Preparing the Data for Machine Learning
###Code
# Drop columns that will not be used as features or target
t1 = (dflist[1]).drop(['v49','v82','v104','v105','v225','v226','v227','v228','v229','v230','v231','v232','v253','v254','v255','v256','v257'], axis=1)
n=gc.collect()
t1 = t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259','v258']]
n=gc.collect()
# Keep only the rows whose v0 falls between 410000 and 413000
t1 = t1.loc[t1['v0'].between(410000, 413000, inclusive=True)]
t1.head()
###Output
_____no_output_____
###Markdown
Splitting the Data into Training (70%) and Test (30%) Sets.
###Code
xtreino, xteste, ytreino, yteste = train_test_split((t1.iloc[:,0:191]),(t1.iloc[:,191:]), test_size = 0.3,random_state=66,shuffle=True)
n=gc.collect()
###Output
_____no_output_____
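###Markdown
A minimal sanity check of the resulting split sizes, assuming the split above (`len()` on a Dask dataframe forces a row count):
###Code
# Hypothetical check of the 70/30 split produced above.
print('Linhas de treino:', len(xtreino), 'Linhas de teste:', len(xteste))
###Output
_____no_output_____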
###Markdown
Model 0: Logistic Regression.
###Code
model = LogisticRegression(C=100000, dual=False, max_iter=3000000)
from joblib import parallel_backend
with parallel_backend('dask'):
model.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model.score(xteste,yteste))
print('\033[1m'+'Intercept:'+'\033[0m',model.intercept_,'\033[1m')
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model.coef_).T
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
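###Markdown
The evaluation block above (train/test R², RMSE, classification reports, confusion matrices and ROC curve) is repeated verbatim for every model below. A minimal sketch of a reusable helper, assuming the `xtreino`/`xteste`/`ytreino`/`yteste` split and the imports already made; the per-model cells below are kept as they are:
###Code
# Hypothetical helper that reproduces the per-model evaluation block above.
def avaliar_modelo(m, nome='PR'):
    with parallel_backend('dask'):
        print('R² de treino:', m.score(xtreino, ytreino),
              'R² de teste:', m.score(xteste, yteste))
        print('RMSE de Treino:', mean_squared_error(ytreino, m.predict(xtreino)),
              'RMSE de Teste:', mean_squared_error(yteste, m.predict(xteste)))
        for X, y, titulo in [(xtreino, ytreino, 'Treino'), (xteste, yteste, 'Teste')]:
            print(classification_report(y, m.predict(X)))
            plt.figure(figsize=(10, 5))
            sns.heatmap(pd.DataFrame(confusion_matrix(y, m.predict(X))),
                        annot=True, cmap="YlGnBu", fmt='g')
            plt.suptitle('Matriz de Confusão ' + nome + ' - Dados de ' + titulo, y=1, fontsize=18)
            plt.show()
        metrics.plot_roc_curve(m, xteste, yteste)
        plt.show()

# Example usage: avaliar_modelo(model)
###Output
_____no_output_____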
###Markdown
Model 1: Decision Tree.
###Code
model1 = DecisionTreeClassifier(max_depth=2, random_state=18)
with parallel_backend('dask'):
model1.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model1.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model1.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model1.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model1.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model1.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model1.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model1.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model1.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model1,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model1.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 2: AdaBoost.
###Code
model2 = AdaBoostClassifier(n_estimators=40)
with parallel_backend('dask'):
model2.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model2.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model2.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model2.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model2.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model2.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model2.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model2.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model2.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model2,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model2.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 3: Gradient Boosting.
###Code
model3 = GradientBoostingClassifier(n_estimators=300)
with parallel_backend('dask'):
model3.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model3.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model3.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model3.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model3.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model3.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model3.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model3.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model3.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model3,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model3.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 4: Bagging.
###Code
model4 = BaggingClassifier(n_estimators=1)
with parallel_backend('dask'):
model4.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model4.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model4.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model4.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model4.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model4.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model4.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model4.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model4.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model4,xteste,yteste)
plt.show()
print('\n')
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 5: Random Forest.
###Code
model5 = RandomForestClassifier(n_estimators=2)
with parallel_backend('dask'):
model5.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model5.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model5.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model5.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model5.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model5.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model5.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model5.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model5.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model5,xteste,yteste)
plt.show()
print('\n')
with parallel_backend('dask'):
importancia01a = pd.DataFrame(model5.feature_importances_ )
t2= t1[['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11','v12','v13','v14','v15','v19','v20','v21','v22','v23','v27','v28','v29','v30','v31','v32','v33','v43','v44','v45','v46','v47','v48','v50','v51','v52','v54','v55','v56','v57','v59','v60','v61','v62','v63','v64','v65','v66','v67','v68','v69','v70','v72','v73','v76','v77','v78','v79','v80','v83','v85','v87','v88','v89','v90','v91','v92','v93','v94','v96','v97','v98','v100','v101','v106','v107','v108','v109','v112','v113','v114','v115','v116','v117','v118','v121','v122','v123','v124','v125','v126','v127','v128','v137','v138','v139','v141','v143','v145','v147','v149','v151','v153','v155','v156','v157','v158','v159','v160','v161','v162','v163','v164','v165','v166','v167','v168','v169','v170','v171','v172','v173','v174','v175','v176','v177','v178','v179','v180','v181','v182','v183','v184','v185','v186','v187','v188','v189','v192','v193','v194','v195','v196','v197','v198','v199','v200','v201','v202','v203','v204','v205','v206','v207','v208','v209','v210','v211','v212','v213','v215','v216','v218','v219','v220','v221','v222','v223','v224','v234','v235','v237','v238','v239','v241','v242','v243','v244','v245','v246','v247','v248','v249','v250','v261','v251','v252','v260','v262','v259']]
importancia01a['02'] = pd.DataFrame(t2.columns)
importancia01 = importancia01a.sort_values(by=0,ascending=False)
importancia01.head(200)
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 6: Support Vector Machine (RBF kernel).
###Code
model6 = SVC(C=30000)
with parallel_backend('dask'):
model6.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model6.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model6.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model6.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model6.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model6.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model6.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model6.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model6.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model6,xteste,yteste)
plt.show()
print('\n')
n=gc.collect()
###Output
_____no_output_____
###Markdown
Model 7: KNN (k-Nearest Neighbors).
###Code
model7 = KNeighborsClassifier(n_neighbors=3)
with parallel_backend('dask'):
model7.fit(xtreino,ytreino)
n=gc.collect()
with parallel_backend('dask'):
print('\033[1m'+'R² de treino:'+'\033[0m',model7.score(xtreino,ytreino),'\033[1m'+'R² de teste:'+'\033[0m',model7.score(xteste,yteste))
print('\033[1m'+'RMSE de Treino:'+'\033[0m',mean_squared_error(ytreino, model7.predict(xtreino)),'\033[1m'+'RMSE de Teste:'+'\033[0m',mean_squared_error(yteste, model7.predict(xteste)))
print('\n')
print('\033[1m'+'Reporte dos Dados de Treino - PR'+'\033[0m')
print(classification_report(ytreino, model7.predict(xtreino)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_treino = confusion_matrix(ytreino, model7.predict(xtreino))
sns.heatmap(pd.DataFrame(cnf_matrix_treino), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Treino',y=1,fontsize=18)
plt.show()
print('\n')
print('\033[1m'+'Reporte dos Dados de Teste - PR'+'\033[0m')
print(classification_report(yteste, model7.predict(xteste)))
print('\n')
plt.figure(figsize=(10, 5))
cnf_matrix_teste = confusion_matrix(yteste, model7.predict(xteste))
sns.heatmap(pd.DataFrame(cnf_matrix_teste), annot=True, cmap="YlGnBu" ,fmt='g')
plt.suptitle('Matriz de Confusão PR - Dados de Teste',y=1,fontsize=18)
plt.show()
print('\n')
metrics.plot_roc_curve(model7,xteste,yteste)
plt.show()
print('\n')
n=gc.collect()
###Output
_____no_output_____
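###Markdown
SVC and KNN are distance-based, so they are usually sensitive to feature scale, and the features here were fit without scaling. A minimal sketch of a scaled variant, assuming the same split and the `preprocessing` module imported earlier (the results above were produced without it):
###Code
# Hypothetical scaled variant (not used above): standardize features before
# fitting distance-based models such as SVC and KNN.
scaler = preprocessing.StandardScaler()
xtreino_s = scaler.fit_transform(xtreino)
xteste_s = scaler.transform(xteste)
knn_s = KNeighborsClassifier(n_neighbors=3).fit(xtreino_s, ytreino)
print('Acurácia de teste (KNN com dados padronizados):', knn_s.score(xteste_s, yteste))
###Output
_____no_output_____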
###Markdown
Consolidating the ROC Curves
###Code
with parallel_backend('dask'):
classifiers = [model, model1, model2, model3, model4, model5, model6, model7]
ax = plt.gca()
for i in classifiers:
metrics.plot_roc_curve(i, xteste, yteste, ax=ax)
###Output
distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
|
its_pre_processing.ipynb | ###Markdown
Interrupted Time Series Analysis - Data Pre-Processing
Basic steps in pre-processing:
1. Read in all ED attendances (adult and children)
2. Filter for age >= 18
3. Create flags for arrivals 0600-2200 (day) and 2200-0600 (night)
4. Filter for day of week = Mon-Thur
5. Create monthly aggregates for outcomes and explanatory variables (e.g. mean_total_time)
6. Create time series specification dataset (e.g. add level, trend variables)
7. Save dataset to file.
Step 1: Read in all data
###Code
import pandas as pd
import numpy as np
col_names = ['atd_no','age','arrival_time','arrival_mode','clock_stop','bed_request','speciality_ref','total_time',
'flag_breach','flag_admit','flag_reatten']
df = pd.read_csv('./Attendances.txt',names=col_names)
df.head(5)
df.shape
###Output
_____no_output_____
###Markdown
Step 2: Filter by Age
###Code
df = df.loc[df['age']>=18]
# on_target = 1 when the attendance did not breach the waiting-time target (i.e. flag_breach == 0)
df['on_target'] = np.where(df['flag_breach'] == 1, 0, 1)
df['arrival_time_d'] = pd.to_datetime(df['arrival_time'], dayfirst = True)
df['arrival_hr']= df['arrival_time_d'].dt.hour
df.head(5)
###Output
_____no_output_____
###Markdown
Step 3: Flag for night and day
###Code
# night_att = 1 for arrivals from 22:00 onwards or with arrival hour <= 6 (hour 6 sits on the day/night boundary)
df['night_att'] = np.where(np.logical_or(df['arrival_hr'] >= 22, df['arrival_hr'] <= 6), 1, 0)
###Output
_____no_output_____
###Markdown
Step 4: Flag for Mon-Thu
###Code
df['dow'] = df['arrival_time_d'].dt.dayofweek
###Output
_____no_output_____
###Markdown
Notes for dt.dayofweek. Monday = 0 and Sunday = 6
###Code
df['dow_mon_thur'] = np.where(df['dow'] <=3, 1, 0)
df['year_mth'] = pd.DatetimeIndex(df['arrival_time_d']).normalize()
df['arrival_time_d'].dt.month.head(4)
df['year_mth_only'] = df['arrival_time_d'].values.astype('datetime64[M]')
df.head(5)
# Note: this repeats the age filter with a strict inequality, so attendances aged exactly 18 are also dropped here
df = df.loc[df['age']>18]
df.shape
###Output
_____no_output_____
###Markdown
Create monthly aggregates
###Code
month_series_mean = df.groupby(['year_mth_only', 'night_att', 'dow_mon_thur'])['total_time'].mean()
month_series_mean.rename('mean_total_time', inplace=True).head()
month_series_n = df.groupby(['year_mth_only', 'night_att', 'dow_mon_thur'])['total_time'].count()
month_series_n.rename('patients_n', inplace=True).head()
month_series_target = df.groupby(['year_mth_only', 'night_att', 'dow_mon_thur'])['on_target'].sum()
month_series_target.rename('on_target', inplace=True).head()
month_series_admit = df.groupby(['year_mth_only', 'night_att', 'dow_mon_thur'])['flag_admit'].sum()
month_series_admit.rename('admit_n', inplace=True).head()
df_month = pd.concat([month_series_mean, month_series_n, month_series_target, month_series_admit], axis=1)
df_month['per_on_target'] = df_month['on_target'] / df_month['patients_n']
df_month['per_admit'] = df_month['admit_n'] / df_month['patients_n']
df_month.reset_index(inplace=True)
df_month['level'] = np.where(df_month['year_mth_only']>='2015-11-01', 1, 0)
df_month.head()
#limit to Monday to Friday
df_week = df_month.loc[df_month['dow_mon_thur'] == 1]
df_week.head(5)
#limit to Night Performance
# use df_week's own column for the boolean mask so it stays aligned with df_week's index
df_nights = df_week.loc[df_week['night_att'] == 1]
df_nights.reset_index(inplace=True)
df_nights['time'] = df_nights.index + 1
df_nights
# trend: months elapsed since the Nov 2015 change (0 beforehand); the -33 offset matches this dataset's row ordering
df_nights['trend'] = np.where(df_nights['year_mth_only']>='2015-11-01',df_nights.index - 33 , 0)
df_nights['group'] = 1
df_nights
#limit to Day Performance
df_days = df_week.loc[df_week['night_att'] == 0]
df_days.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
Time series format for regression
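For reference, the columns built in this section (time, level, trend, group, and the group_time / group_level / group_trend interactions) support a controlled interrupted time series (segmented regression) model of roughly this form; this is a sketch, and the exact specification, estimator and error structure are left to the analysis stage:
$$
Y_t = \beta_0 + \beta_1\,\mathrm{time}_t + \beta_2\,\mathrm{level}_t + \beta_3\,\mathrm{trend}_t + \beta_4\,\mathrm{group} + \beta_5\,(\mathrm{group}\times\mathrm{time}_t) + \beta_6\,(\mathrm{group}\times\mathrm{level}_t) + \beta_7\,(\mathrm{group}\times\mathrm{trend}_t) + \varepsilon_t
$$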
###Code
df_days['time'] = df_days.index + 1
df_days
df_days['trend'] = np.where(df_days['year_mth_only']>='2015-11-01',df_days.index - 33 , 0)
df_days['group'] = 0
df_days
df_days.drop(labels = 'index', inplace=True, axis=1)
df_nights.drop(labels = 'index', inplace=True, axis=1)
df_ts_spec = pd.concat([df_nights, df_days])
df_ts_spec['group_time'] = df_ts_spec['group'] * df_ts_spec['time']
df_ts_spec['group_level'] = df_ts_spec['group'] * df_ts_spec['level']
df_ts_spec['group_trend'] = df_ts_spec['group'] * df_ts_spec['trend']
df_ts_spec.shape
df_ts_spec
df_ts_spec.to_csv("20180125_night_day_data_ts.csv")
###Output
_____no_output_____ |
f1data_wrangling_framework.ipynb | ###Markdown
F1 Data Wrangling Framework
###Code
#Hypothesis:
#Predict the probability that a wreck will happen in an F1 race, and assess how the driver's speed
#affects the likelihood of a wreck.
#We are looking specifically at races that have taken place from 2009-2020 at Monaco, Monza, and Barcelona.
#We will be using the following CSV Files found via our Kaggle Data Set:
#Laptimes.CSV, Pitstops.CSV, Results.CSV, Races.CSV, Status.CSV
# We will be using data with a CircuitID = 4, 6, and 14 for Barcelona, Monaco, and Monza
# We will be using data with a StatusID = 3, 4, and 104 for Accident, Collision, or Fatal Accident
# ^^^ we would definitely want to include more statusIds that indicate a safety car/stoppage in play, if anyone on the team knows of others
import pandas as pd
import os
import csv
print(os.getcwd())
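# Illustrative lookup tables for the ID numbers listed above (the dict names below are ours, not from the dataset;
# the filtering cells later in the notebook still use plain lists, so these are documentation only):
CIRCUIT_NAMES = {4: "Barcelona", 6: "Monaco", 14: "Monza"}
CRASH_STATUS_NAMES = {3: "Accident", 4: "Collision", 104: "Fatal accident"}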
###Output
/Users/jamesbifulco/Desktop/f1capstonerepo/githubrepo
###Markdown
Combining Results.CSV and Races.CSV
###Code
# Now let's import Results.csv and Races.csv,
# Let's filter them out to meet our conditions respectively,
# After filtering is complete, join these dataframes on the appropriate key values
# Appropriate key value: RaceID
results = pd.read_csv("/Users/jamesbifulco/Desktop/f1capstonerepo/githubrepo/results.csv")
results.head()
#Let's drop the columns that we do not need: resultId, number, positionText, positionOrder, points
results.drop(columns=["resultId", "number", "positionText", "positionOrder", "points"], inplace=True)
results.head()
#Now lets filter by statusID
statuses = [3, 4, 104]
resultsfiltered = results[results['statusId'].isin(statuses)]
resultsfiltered.head()
# Above is results CSV filtered by status and variables relevant to our hypothesis
#Check back with team to make sure there are no variables deleted that are desired for analysis
# Now to continue crafting our project DataFrame
# Next step will be to filter out Races.csv
# And then join it with resultsfiltered ON RaceId
#-> check with team that this is valid
races = pd.read_csv("/Users/jamesbifulco/Desktop/f1capstonerepo/githubrepo/races.csv")
races.head()
races.drop("round", axis=1, inplace=True)
races.drop("time", axis=1, inplace=True)
races.drop("url", axis=1, inplace=True)
races.drop("year", axis=1, inplace=True)
races.head()
results_and_races = pd.merge(resultsfiltered, races, on="raceId")
results_and_races.head()
# Above is a Dataframe containing the CSVs of Results and Races filtered out for Statuses that designate
# a wreck or safety car appearance.
#Next we will filter out the dataframe for our specific locations (Monaco, Monza, Barcelona)
circuits = [4,6,14]
results_and_races_in_MMB = results_and_races[results_and_races['circuitId'].isin(circuits)]
results_and_races_in_MMB.head()
# Transfer all of the values of the raceID column into a list that I will use to filter out Lap_Times and Pit_Stops
raceIdMMB = results_and_races_in_MMB["raceId"].tolist()
# the list below contains all the values in the name column in this df
#to confirm that the data has been sorted by location correctly
locationconfirmation = results_and_races_in_MMB["name"].tolist()
#Above is a dataframe containing instances of crashes (designated by status 3, 4, 104) that have taken place in Monaco
#Monza, and Barcelona from 2009-2020
#This includes: RaceId, DriverId, ConstructorID, Starting Position on Grid, Final Position, Laps, Driver's Fastest Lap
#and how that ranks per race, Driver's fastest laptime, fastest lap speed, Status, Circuit, name of race and Date.
###Output
_____no_output_____
###Markdown
Crash Data Filtered by Track
###Code
#BARCELONA
barcelona = [4]
results_and_races_in_barcelona = results_and_races[results_and_races['circuitId'].isin(barcelona)]
results_and_races_in_barcelona.head()
#MONACO
monaco = [6]
results_and_races_in_monaco = results_and_races[results_and_races['circuitId'].isin(monaco)]
results_and_races_in_monaco.head()
#MONZA
monza = [14]
results_and_races_in_monza = results_and_races[results_and_races['circuitId'].isin(monza)]
results_and_races_in_monza.head()
###Output
_____no_output_____
###Markdown
Crash & Non Crash Data Filtered by Track
###Code
all_results_and_races = pd.merge(results, races, on="raceId")
# Note: the three dataframes below reuse the names from the crash-only section above and overwrite those frames
#BARCELONA
barcelona = [4]
results_and_races_in_barcelona = all_results_and_races[all_results_and_races['circuitId'].isin(barcelona)]
results_and_races_in_barcelona.head()
#monaco
monaco = [6]
results_and_races_in_monaco = all_results_and_races[all_results_and_races['circuitId'].isin(monaco)]
results_and_races_in_monaco.head()
#monza
monza = [14]
results_and_races_in_monza = all_results_and_races[all_results_and_races['circuitId'].isin(monza)]
results_and_races_in_monza.head()
###Output
_____no_output_____
###Markdown
Cleaning Laptimes.CSV and PitStops.CSV
###Code
lap_times = pd.read_csv("/Users/jamesbifulco/Desktop/f1capstonerepo/githubrepo/lap_times.csv")
lap_times.head()
pit_stops = pd.read_csv("/Users/jamesbifulco/Desktop/f1capstonerepo/githubrepo/pit_stops.csv")
pit_stops.head()
pit_stops_MMB = pit_stops[pit_stops["raceId"].isin(raceIdMMB)]
lap_times_MMB = lap_times[lap_times["raceId"].isin(raceIdMMB)]
# These two dataframes are lap times and pitstops that occurred in races in Monaco, Monza, and Barcelona 2009-2020
# Based on the list of raceId's that were present in the previous dataframe that includes crash data.
#Confused about where to go from here... do I then filter these dataframes out by a list of driverIds found in the
#results and races in mbb dataframe?
#After doing that, do I merge these two filtered dataframes?
# These two dataframes would only contain info on drivers and races who are included in the previous dataframe..
#Would merging ON driverId make sense?
#The final product of this filtering and sorting would contain a dataframe of crash data (designated by status 3, 4, 104)
# where it includes information on instances of crashes in races that took place in Monaco, Monza, Barcelona from
#'09-'20
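# --- Sketch of one possible next step (an illustration, not a settled answer to the questions above) ---
# Assumption: lap and pit-stop rows are only needed for the driver-race pairs that appear in the crash
# dataframe, so we key on both raceId and driverId; merging on driverId alone would mix up races.
crash_keys = results_and_races_in_MMB[['raceId', 'driverId']].drop_duplicates()
lap_times_crash_drivers = lap_times_MMB.merge(crash_keys, on=['raceId', 'driverId'], how='inner')
pit_stops_crash_drivers = pit_stops_MMB.merge(crash_keys, on=['raceId', 'driverId'], how='inner')
# Joining lap times to pit stops afterwards would need ['raceId', 'driverId', 'lap'] as the key,
# since each driver has many laps per race.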
###Output
_____no_output_____ |
pml1/figure_notebooks/chapter11_linear_regression_figures.ipynb | ###Markdown
Figure 11.1: Polynomial of degrees 1 and 2 fit to 21 datapoints. Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.2: (a) Contours of the RSS error surface for the example in \cref fig:linregPolyDegree1 . The blue cross represents the MLE. (b) Corresponding surface plot. Figure(s) generated by [linreg_contours_sse_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_contours_sse_plot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_contours_sse_plot.py
###Output
_____no_output_____
###Markdown
Figure 11.3: Graphical interpretation of least squares for $m=3$ equations and $n=2$ unknowns when solving the system $\mathbf{A}\mathbf{x} = \mathbf{b}$. $\mathbf{a}_1$ and $\mathbf{a}_2$ are the columns of $\mathbf{A}$, which define a 2d linear subspace embedded in $\mathbb{R}^3$. The target vector $\mathbf{b}$ is a vector in $\mathbb{R}^3$; its orthogonal projection onto the linear subspace is denoted $\hat{\mathbf{b}}$. The line from $\mathbf{b}$ to $\hat{\mathbf{b}}$ is the vector of residual errors, whose norm we want to minimize.
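For reference, assuming $\mathbf{A}$ has full column rank, the projection shown in this figure is
$$
\hat{\mathbf{b}} = \mathbf{A}\hat{\mathbf{x}} = \mathbf{A}(\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top\mathbf{b}.
$$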
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.4: Regression coefficients over time for the 1d model in \cref fig:linregPoly2 (a). Figure(s) generated by [linregOnlineDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/linregOnlineDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linregOnlineDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.5: Residual plot for polynomial regression of degree 1 and 2 for the functions in \cref fig:linregPoly2 (a-b). Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.6: Fit vs actual plots for polynomial regression of degree 1 and 2 for the functions in \cref fig:linregPoly2 (a-b). Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.7: (a-c) Ridge regression applied to a degree 14 polynomial fit to 21 datapoints. (d) MSE vs strength of regularizer. The degree of regularization increases from left to right, so model complexity decreases from left to right. Figure(s) generated by [linreg_poly_ridge.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_ridge.py)
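As a reminder of what is being swept here (a standard form, ignoring the unpenalized intercept), the ridge estimate for regularization strength $\lambda$ is
$$
\hat{\mathbf{w}}_{\mathrm{ridge}} = (\mathbf{X}^\top\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}^\top\mathbf{y}.
$$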
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_poly_ridge.py
###Output
_____no_output_____
###Markdown
Figure 11.8: Geometry of ridge regression. The likelihood is shown as an ellipse, and the prior is shown as a circle centered on the origin. Adapted from Figure 3.15 of [Bis06] . Figure(s) generated by [geom_ridge.py](https://github.com/probml/pyprobml/blob/master/scripts/geom_ridge.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run geom_ridge.py
###Output
_____no_output_____
###Markdown
Figure 11.9: Illustration of $\ell _1$ (left) vs $\ell _2$ (right) regularization of a least squares problem. Adapted from Figure 3.12 of [HTF01] .
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.10: Left: soft thresholding. Right: hard thresholding. In both cases, the horizontal axis is the residual error incurred by making predictions using all the coefficients except for $w_k$, and the vertical axis is the estimated coefficient $\hat{w}_k$ that minimizes this penalized residual. The flat region in the middle is the interval $[-\lambda ,+\lambda ]$.
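For reference, the soft-thresholding operator plotted on the left is
$$
\mathrm{SoftThreshold}(c;\lambda) = \mathrm{sign}(c)\,\max(|c|-\lambda,\,0),
$$
while hard thresholding keeps $c$ unchanged when $|c|>\lambda$ and sets it to zero otherwise.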
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.11: (a) Profiles of ridge coefficients for the prostate cancer example vs bound $B$ on $\ell _2$ norm of $\mathbf w $, so small $B$ (large $\lambda $) is on the left. The vertical line is the value chosen by 5-fold CV using the 1 standard error rule. Adapted from Figure 3.8 of [HTF09] . Figure(s) generated by [ridgePathProstate.py](https://github.com/probml/pyprobml/blob/master/scripts/ridgePathProstate.py) [lassoPathProstate.py](https://github.com/probml/pyprobml/blob/master/scripts/lassoPathProstate.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run ridgePathProstate.py
%run lassoPathProstate.py
###Output
_____no_output_____
###Markdown
Figure 11.12: Values of the coefficients for linear regression model fit to prostate cancer dataset as we vary the strength of the $\ell _1$ regularizer. These numbers are plotted in \cref fig:lassoPathProstate (b).
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.13: Results of different methods on the prostate cancer data, which has 8 features and 67 training cases. Methods are: OLS = ordinary least squares, Subset = best subset regression, Ridge, Lasso. Rows represent the coefficients; we see that subset regression and lasso give sparse solutions. Bottom row is the mean squared error on the test set (30 cases). Adapted from Table 3.3. of [HTF09] . Figure(s) generated by [prostate_comparison.py](https://github.com/probml/pyprobml/blob/master/scripts/prostate_comparison.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run prostate_comparison.py
###Output
_____no_output_____
###Markdown
Figure 11.14: Boxplot displaying (absolute value of) prediction errors on the prostate cancer test set for different regression methods. Figure(s) generated by [prostate_comparison.py](https://github.com/probml/pyprobml/blob/master/scripts/prostate_comparison.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run prostate_comparison.py
###Output
_____no_output_____
###Markdown
Figure 11.15: Example of recovering a sparse signal using lasso. See text for details. Adapted from Figure 1 of [FNW07] . Figure(s) generated by [sparse_sensing_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/sparse_sensing_demo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run sparse_sensing_demo.py
###Output
_____no_output_____
###Markdown
Figure 11.16: Illustration of group lasso where the original signal is piecewise Gaussian. (a) Original signal. (b) Vanilla lasso estimate. (c) Group lasso estimate using an $\ell _2$ norm on the blocks. (d) Group lasso estimate using an $\ell _ \infty $ norm on the blocks. Adapted from Figures 3-4 of [WNF09] . Figure(s) generated by [groupLassoDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/groupLassoDemo.py)
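For reference, the group-lasso penalty illustrated here replaces the elementwise $\ell_1$ penalty with a sum of per-block norms, e.g. $\lambda \sum_g \lVert \mathbf{w}_g \rVert_2$ (or $\lVert \mathbf{w}_g \rVert_\infty$ for the $\ell_\infty$ variant), which drives entire blocks $\mathbf{w}_g$ to zero together.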
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run groupLassoDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.17: Same as \cref fig:groupLassoGauss , except the original signal is piecewise constant. Figure(s) generated by [groupLassoDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/groupLassoDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run groupLassoDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.18: Illustration of B-splines of degree 0, 1 and 3. Top row: unweighted basis functions. Dots mark the locations of the 3 internal knots at $[0.25, 0.5, 0.75]$. Bottom row: weighted combination of basis functions using random weights. Figure(s) generated by [splines_basis_weighted.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_basis_weighted.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run splines_basis_weighted.py
###Output
_____no_output_____
###Markdown
Figure 11.19: Design matrix for B-splines of degree (a) 0, (b) 1 and (c) 3. We evaluate the splines on 20 inputs ranging from 0 to 1. Figure(s) generated by [splines_basis_heatmap.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_basis_heatmap.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run splines_basis_heatmap.py
###Output
_____no_output_____
###Markdown
Figure 11.20: Fitting a cubic spline regression model with 15 knots to a 1d dataset. Figure(s) generated by [splines_cherry_blossoms.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_cherry_blossoms.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run splines_cherry_blossoms.py
###Output
_____no_output_____
###Markdown
Figure 11.21: (a) Illustration of robust linear regression. Figure(s) generated by [linregRobustDemoCombined.py](https://github.com/probml/pyprobml/blob/master/scripts/linregRobustDemoCombined.py) [huberLossPlot.py](https://github.com/probml/pyprobml/blob/master/scripts/huberLossPlot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linregRobustDemoCombined.py
%run huberLossPlot.py
###Output
_____no_output_____
###Markdown
Figure 11.22: Sequential Bayesian inference of the parameters of a linear regression model $p(y|\mathbf{x}) = \mathcal{N}(y \mid w_0 + w_1 x_1, \sigma^2)$. Left column: likelihood function for current data point. Middle column: posterior given first $N$ data points, $p(w_0,w_1|\mathbf{x}_{1:N},y_{1:N},\sigma^2)$. Right column: samples from the current posterior predictive distribution. Row 1: prior distribution ($N=0$). Row 2: after 1 data point. Row 3: after 2 data points. Row 4: after 100 data points. The white cross in columns 1 and 2 represents the true parameter value; we see that the mode of the posterior rapidly converges to this point. The blue circles in column 3 are the observed data points. Adapted from Figure 3.7 of [Bis06]. Figure(s) generated by [linreg_2d_bayes_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_bayes_demo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_2d_bayes_demo.py
###Output
_____no_output_____
###Markdown
Figure 11.23: (a) Plugin approximation to predictive density (we plug in the MLE of the parameters) when fitting a second degree polynomial to some 1d data. (b) Posterior predictive density, obtained by integrating out the parameters. Black curve is posterior mean, error bars are 2 standard deviations of the posterior predictive density. (c) 10 samples from the plugin approximation to posterior predictive distribution. (d) 10 samples from the true posterior predictive distribution. Figure(s) generated by [linreg_post_pred_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_post_pred_plot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_post_pred_plot.py
###Output
_____no_output_____
###Markdown
Figure 11.24: Posterior samples of $p(w_0,w_1| \mathcal D )$ for 1d linear regression model $p(y|x,\boldsymbol \theta )=\mathcal N (y|w_0 + w_1 x, \sigma ^2)$ with a Gaussian prior. (a) Original data. (b) Centered data. Figure(s) generated by [linreg_2d_bayes_centering_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_bayes_centering_pymc3.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run linreg_2d_bayes_centering_pymc3.py
###Output
_____no_output_____
###Markdown
Figure 11.25: Posterior marginals for the parameters in the multi-leg example. Figure(s) generated by [multi_collinear_legs_numpyro.py](https://github.com/probml/pyprobml/blob/master/scripts/multi_collinear_legs_numpyro.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run multi_collinear_legs_numpyro.py
###Output
_____no_output_____
###Markdown
Figure 11.26: Posteriors for the multi-leg example. (a) Joint posterior $p(\beta _l,\beta _r| \mathcal D )$ (b) Posterior of $p(\beta _l + \beta _r | data)$. Figure(s) generated by [multi_collinear_legs_numpyro.py](https://github.com/probml/pyprobml/blob/master/scripts/multi_collinear_legs_numpyro.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
%run multi_collinear_legs_numpyro.py
###Output
_____no_output_____
###Markdown
Figure 11.1: Polynomial of degrees 1 and 2 fit to 21 datapoints. Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.2: (a) Contours of the RSS error surface for the example in \cref fig:linregPolyDegree1 . The blue cross represents the MLE. (b) Corresponding surface plot. Figure(s) generated by [linreg_contours_sse_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_contours_sse_plot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_contours_sse_plot.py
###Output
_____no_output_____
###Markdown
Figure 11.3: Graphical interpretation of least squares for $m=3$ equations and $n=2$ unknowns when solving the system $\mathbf{A}\bm{x} = \bm{b}$. $\bm{a}_1$ and $\bm{a}_2$ are the columns of $\mathbf{A}$, which define a 2d linear subspace embedded in $\mathbb{R}^3$. The target vector $\bm{b}$ is a vector in $\mathbb{R}^3$; its orthogonal projection onto the linear subspace is denoted $\hat{\bm{b}}$. The line from $\bm{b}$ to $\hat{\bm{b}}$ is the vector of residual errors, whose norm we want to minimize.
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.4: Regression coefficients over time for the 1d model in \cref fig:linregPoly2 (a). Figure(s) generated by [linregOnlineDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/linregOnlineDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linregOnlineDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.5: Residual plot for polynomial regression of degree 1 and 2 for the functions in \cref fig:linregPoly2 (a-b). Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.6: Fit vs actual plots for polynomial regression of degree 1 and 2 for the functions in \cref fig:linregPoly2 (a-b). Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.7: Geometry of ridge regression. The likelihood is shown as an ellipse, and the prior is shown as a circle centered on the origin. Adapted from Figure 3.15 of [Bis06] . Figure(s) generated by [geom_ridge.py](https://github.com/probml/pyprobml/blob/master/scripts/geom_ridge.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run geom_ridge.py
###Output
_____no_output_____
###Markdown
Figure 11.8: Illustration of $\ell _1$(left) vs $\ell _2$(right) regularization of a least squares problem. Adapted from Figure 3.12 of [HTF01]
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.9: Left: soft thresholding. Right: hard thresholding. In both cases, the horizontal axis is the residual error incurred by making predictions using all the coefficients except for $w_k$, and the vertical axis is the estimated coefficient $\hat{w}_k$ that minimizes this penalized residual. The flat region in the middle is the interval $[-\lambda ,+\lambda ]$.
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.10: (a) Profiles of ridge coefficients for the prostate cancer example vs bound $B$ on $\ell _2$ norm of $ \bm w $, so small $B$(large $\lambda $) is on the left. The vertical line is the value chosen by 5-fold CV using the 1 standard error rule. Adapted from Figure 3.8 of [HTF09] . Figure(s) generated by [ridgePathProstate.py](https://github.com/probml/pyprobml/blob/master/scripts/ridgePathProstate.py) [lassoPathProstate.py](https://github.com/probml/pyprobml/blob/master/scripts/lassoPathProstate.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run ridgePathProstate.py
deimport(superimport)
%run lassoPathProstate.py
###Output
_____no_output_____
###Markdown
Figure 11.11: Results of different methods on the prostate cancer data, which has 8 features and 67 training cases. Methods are: OLS = ordinary least squares, Subset = best subset regression, Ridge, Lasso. Rows represent the coefficients; we see that subset regression and lasso give sparse solutions. Bottom row is the mean squared error on the test set (30 cases). Adapted from Table 3.3. of [HTF09] . Figure(s) generated by [prostate_comparison.py](https://github.com/probml/pyprobml/blob/master/scripts/prostate_comparison.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run prostate_comparison.py
###Output
_____no_output_____
###Markdown
Figure 11.12: Boxplot displaying (absolute value of) prediction errors on the prostate cancer test set for different regression methods. Figure(s) generated by [prostate_comparison.py](https://github.com/probml/pyprobml/blob/master/scripts/prostate_comparison.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run prostate_comparison.py
###Output
_____no_output_____
###Markdown
Figure 11.13: Example of recovering a sparse signal using lasso. See text for details. Adapted from Figure 1 of [FNW07] . Figure(s) generated by [sparse_sensing_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/sparse_sensing_demo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run sparse_sensing_demo.py
###Output
_____no_output_____
###Markdown
Figure 11.14: Illustration of group lasso where the original signal is piecewise Gaussian. (a) Original signal. (b) Vanilla lasso estimate. (c) Group lasso estimate using an $\ell _2$ norm on the blocks. (d) Group lasso estimate using an $\ell _ \infty $ norm on the blocks. Adapted from Figures 3-4 of [WNF09] . Figure(s) generated by [groupLassoDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/groupLassoDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run groupLassoDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.15: Same as \cref fig:groupLassoGauss , except the original signal is piecewise constant. Figure(s) generated by [groupLassoDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/groupLassoDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run groupLassoDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.16: Illustration of B-splines of degree 0, 1 and 3. Top row: unweighted basis functions. Dots mark the locations of the 3 internal knots at $[0.25, 0.5, 0.75]$. Bottom row: weighted combination of basis functions using random weights. Figure(s) generated by [splines_basis_weighted.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_basis_weighted.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run splines_basis_weighted.py
###Output
_____no_output_____
###Markdown
Figure 11.17: Design matrix for B-splines of degree (a) 0, (b) 1 and (c) 3. We evaluate the splines on 20 inputs ranging from 0 to 1. Figure(s) generated by [splines_basis_heatmap.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_basis_heatmap.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run splines_basis_heatmap.py
###Output
_____no_output_____
###Markdown
Figure 11.18: Fitting a cubic spline regression model with 15 knots to a 1d dataset. Figure(s) generated by [splines_cherry_blossoms.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_cherry_blossoms.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run splines_cherry_blossoms.py
###Output
_____no_output_____
###Markdown
Figure 11.19: (a) Illustration of robust linear regression. Figure(s) generated by [linregRobustDemoCombined.py](https://github.com/probml/pyprobml/blob/master/scripts/linregRobustDemoCombined.py) [huberLossPlot.py](https://github.com/probml/pyprobml/blob/master/scripts/huberLossPlot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linregRobustDemoCombined.py
deimport(superimport)
%run huberLossPlot.py
###Output
_____no_output_____
###Markdown
Figure 11.20: Sequential Bayesian inference of the parameters of a linear regression model $p(y| \bm x ) = \mathcal N (y | w_0 + w_1 x_1, \sigma ^2)$. Left column: likelihood function for current data point. Middle column: posterior given first $N$ data points, $p(w_0,w_1| \bm x _ 1:N ,y_ 1:N ,\sigma ^2)$. Right column: samples from the current posterior predictive distribution. Row 1: prior distribution ($N=0$). Row 2: after 1 data point. Row 3: after 2 data points. Row 4: after 100 data points. The white cross in columns 1 and 2 represents the true parameter value; we see that the mode of the posterior rapidly converges to this point. The blue circles in column 3 are the observed data points. Adapted from Figure 3.7 of [Bis06] . Figure(s) generated by [linreg_2d_bayes_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_bayes_demo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_2d_bayes_demo.py
###Output
_____no_output_____
###Markdown
Figure 11.21: (a) Plugin approximation to predictive density (we plug in the MLE of the parameters) when fitting a second degree polynomial to some 1d data. (b) Posterior predictive density, obtained by integrating out the parameters. Black curve is posterior mean, error bars are 2 standard deviations of the posterior predictive density. (c) 10 samples from the plugin approximation to posterior predictive distribution. (d) 10 samples from the true posterior predictive distribution. Figure(s) generated by [linreg_post_pred_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_post_pred_plot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_post_pred_plot.py
###Output
_____no_output_____
###Markdown
Figure 11.22: Posterior samples of $p(w_0,w_1| \mathcal D )$ for 1d linear regression model $p(y|x, \bm \theta )=\mathcal N (y|w_0 + w_1 x, \sigma ^2)$ with a Gaussian prior. (a) Original data. (b) Centered data. Figure(s) generated by [linreg_2d_bayes_centering_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_bayes_centering_pymc3.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run linreg_2d_bayes_centering_pymc3.py
###Output
_____no_output_____
###Markdown
Figure 11.23: Posterior marginals for the parameters in the multi-leg example. Figure(s) generated by [multi_collinear_legs_numpyro.py](https://github.com/probml/pyprobml/blob/master/scripts/multi_collinear_legs_numpyro.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run multi_collinear_legs_numpyro.py
###Output
_____no_output_____
###Markdown
Figure 11.24: Posteriors for the multi-leg example. (a) Joint posterior $p(\beta _l,\beta _r| \mathcal D )$(b) Posterior of $p(\beta _l + \beta _r | data)$. Figure(s) generated by [multi_collinear_legs_numpyro.py](https://github.com/probml/pyprobml/blob/master/scripts/multi_collinear_legs_numpyro.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
from deimport.deimport import deimport
print('finished!')
deimport(superimport)
%run multi_collinear_legs_numpyro.py
###Output
_____no_output_____
###Markdown
Figure 11.1: Polynomial of degrees 1 and 2 fit to 21 datapoints. Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.2: (a) Contours of the RSS error surface for the example in Figure 11.1. The blue cross represents the MLE. (b) Corresponding surface plot. Figure(s) generated by [linreg_contours_sse_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_contours_sse_plot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_contours_sse_plot.py
###Output
_____no_output_____
###Markdown
Figure 11.3: Graphical interpretation of least squares for $m=3$ equations and $n=2$ unknowns when solving the system $\mathbf A \bm x = \bm b $. $ \bm a _1$ and $ \bm a _2$ are the columns of $\mathbf A $, which define a 2d linear subspace embedded in $\mathbb R ^3$. The target vector $ \bm b $ is a vector in $\mathbb R ^3$; its orthogonal projection onto the linear subspace is denoted $\hat{ \bm b }$. The line from $ \bm b $ to $\hat{ \bm b }$ is the vector of residual errors, whose norm we want to minimize.
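For reference (an added note, not part of the original caption), the minimizing coefficients are given by the normal equations, $\hat{ \bm x } = (\mathbf A ^\top \mathbf A )^{-1} \mathbf A ^\top \bm b $ (assuming $\mathbf A $ has full column rank), and the projection is $\hat{ \bm b } = \mathbf A \hat{ \bm x }$.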
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.4: Regression coefficients over time for the 1d model in Figure 11.1 (a). Figure(s) generated by [linregOnlineDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/linregOnlineDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linregOnlineDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.5: Residual plot for polynomial regression of degree 1 and 2 for the functions in Figure 11.1 (a-b). Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.6: Fit vs actual plots for polynomial regression of degree 1 and 2 for the functions in Figure 11.1 (a-b). Figure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_poly_vs_degree.py
###Output
_____no_output_____
###Markdown
Figure 11.7: Geometry of ridge regression. The likelihood is shown as an ellipse, and the prior is shown as a circle centered on the origin. Adapted from Figure 3.15 of [Bis06] . Figure(s) generated by [geom_ridge.py](https://github.com/probml/pyprobml/blob/master/scripts/geom_ridge.py)
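For reference (an added note, not part of the original caption), the ridge/MAP estimate illustrated here has the closed form $\hat{ \bm w }_{\text{ridge}} = (\mathbf X ^\top \mathbf X + \lambda \mathbf I )^{-1} \mathbf X ^\top \bm y $, which shrinks the maximum likelihood estimate towards the origin.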
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n geom_ridge.py
###Output
_____no_output_____
###Markdown
Figure 11.8: Illustration of $\ell _1$(left) vs $\ell _2$(right) regularization of a least squares problem. Adapted from Figure 3.12 of [HTF01]
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.9: Left: soft thresholding. Right: hard thresholding. In both cases, the horizontal axis is the residual error incurred by making predictions using all the coefficients except for $w_k$, and the vertical axis is the estimated coefficient $\hat{w}_k$ that minimizes this penalized residual. The flat region in the middle is the interval $[-\lambda ,+\lambda ]$.
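For reference (an added note, not part of the original caption), the soft thresholding operator shown on the left is $\mathrm{SoftThreshold}(x; \lambda ) = \mathrm{sign}(x)\,(|x| - \lambda )_+$, which sets values inside $[-\lambda , +\lambda ]$ exactly to zero and shrinks the rest towards zero; hard thresholding instead leaves values outside the interval unchanged.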
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
###Output
_____no_output_____
###Markdown
Figure 11.10: (a) Profiles of ridge coefficients for the prostate cancer example vs bound $B$ on $\ell _2$ norm of $ \bm w $, so small $B$(large $\lambda $) is on the left. The vertical line is the value chosen by 5-fold CV using the 1 standard error rule. Adapted from Figure 3.8 of [HTF09] . Figure(s) generated by [ridgePathProstate.py](https://github.com/probml/pyprobml/blob/master/scripts/ridgePathProstate.py) [lassoPathProstate.py](https://github.com/probml/pyprobml/blob/master/scripts/lassoPathProstate.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n ridgePathProstate.py
try_deimport()
%run -n lassoPathProstate.py
###Output
_____no_output_____
###Markdown
Figure 11.11: Results of different methods on the prostate cancer data, which has 8 features and 67 training cases. Methods are: OLS = ordinary least squares, Subset = best subset regression, Ridge, Lasso. Rows represent the coefficients; we see that subset regression and lasso give sparse solutions. Bottom row is the mean squared error on the test set (30 cases). Adapted from Table 3.3. of [HTF09] . Figure(s) generated by [prostate_comparison.py](https://github.com/probml/pyprobml/blob/master/scripts/prostate_comparison.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n prostate_comparison.py
###Output
_____no_output_____
###Markdown
Figure 11.12: Boxplot displaying (absolute value of) prediction errors on the prostate cancer test set for different regression methods. Figure(s) generated by [prostate_comparison.py](https://github.com/probml/pyprobml/blob/master/scripts/prostate_comparison.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n prostate_comparison.py
###Output
_____no_output_____
###Markdown
Figure 11.13: Example of recovering a sparse signal using lasso. See text for details. Adapted from Figure 1 of [FNW07] . Figure(s) generated by [sparse_sensing_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/sparse_sensing_demo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n sparse_sensing_demo.py
###Output
_____no_output_____
###Markdown
Figure 11.14: Illustration of group lasso where the original signal is piecewise Gaussian. (a) Original signal. (b) Vanilla lasso estimate. (c) Group lasso estimate using an $\ell _2$ norm on the blocks. (d) Group lasso estimate using an $\ell _ \infty $ norm on the blocks. Adapted from Figures 3-4 of [WNF09] . Figure(s) generated by [groupLassoDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/groupLassoDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n groupLassoDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.15: Same as Figure 11.14, except the original signal is piecewise constant. Figure(s) generated by [groupLassoDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/groupLassoDemo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n groupLassoDemo.py
###Output
_____no_output_____
###Markdown
Figure 11.16: Illustration of B-splines of degree 0, 1 and 3. Top row: unweighted basis functions. Dots mark the locations of the 3 internal knots at $[0.25, 0.5, 0.75]$. Bottom row: weighted combination of basis functions using random weights. Figure(s) generated by [splines_basis_weighted.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_basis_weighted.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n splines_basis_weighted.py
###Output
_____no_output_____
###Markdown
Figure 11.17: Design matrix for B-splines of degree (a) 0, (b) 1 and (c) 3. We evaluate the splines on 20 inputs ranging from 0 to 1. Figure(s) generated by [splines_basis_heatmap.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_basis_heatmap.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n splines_basis_heatmap.py
###Output
_____no_output_____
###Markdown
Figure 11.18: Fitting a cubic spline regression model with 15 knots to a 1d dataset. Figure(s) generated by [splines_cherry_blossoms.py](https://github.com/probml/pyprobml/blob/master/scripts/splines_cherry_blossoms.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n splines_cherry_blossoms.py
###Output
_____no_output_____
###Markdown
Figure 11.19: (a) Illustration of robust linear regression. Figure(s) generated by [linregRobustDemoCombined.py](https://github.com/probml/pyprobml/blob/master/scripts/linregRobustDemoCombined.py) [huberLossPlot.py](https://github.com/probml/pyprobml/blob/master/scripts/huberLossPlot.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linregRobustDemoCombined.py
try_deimport()
%run -n huberLossPlot.py
###Output
_____no_output_____
###Markdown
Figure 11.20: Sequential Bayesian inference of the parameters of a linear regression model $p(y| \bm x ) = \mathcal N (y | w_0 + w_1 x_1, \sigma ^2)$. Left column: likelihood function for current data point. Middle column: posterior given first $N$ data points, $p(w_0,w_1| \bm x _ 1:N ,y_ 1:N ,\sigma ^2)$. Right column: samples from the current posterior predictive distribution. Row 1: prior distribution ($N=0$). Row 2: after 1 data point. Row 3: after 2 data points. Row 4: after 100 data points. The white cross in columns 1 and 2 represents the true parameter value; we see that the mode of the posterior rapidly converges to this point. The blue circles in column 3 are the observed data points. Adapted from Figure 3.7 of [Bis06] . Figure(s) generated by [linreg_2d_bayes_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_bayes_demo.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_2d_bayes_demo.py
###Output
_____no_output_____
###Markdown
Figure 11.21: (a) Plugin approximation to predictive density (we plug in the MLE of the parameters) when fitting a second degree polynomial to some 1d data. (b) Posterior predictive density, obtained by integrating out the parameters. Black curve is posterior mean, error bars are 2 standard deviations of the posterior predictive density. (c) 10 samples from the plugin approximation to posterior predictive distribution. (d) 10 samples from the true posterior predictive distribution. Figure(s) generated by [linreg_post_pred_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_post_pred_plot.py)
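For reference (an added note, not part of the original caption), the plugin approximation uses $p(y| \bm x , \mathcal D ) \approx p(y| \bm x , \hat{ \bm \theta })$ with $\hat{ \bm \theta }$ the MLE, whereas the exact posterior predictive integrates out the parameters, $p(y| \bm x , \mathcal D ) = \int p(y| \bm x , \bm \theta )\, p( \bm \theta | \mathcal D )\, d \bm \theta $; the latter is what widens the error bars away from the data.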
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_post_pred_plot.py
###Output
_____no_output_____
###Markdown
Figure 11.22: Posterior samples of $p(w_0,w_1| \mathcal D )$ for 1d linear regression model $p(y|x, \bm \theta )=\mathcal N (y|w_0 + w_1 x, \sigma ^2)$ with a Gaussian prior. (a) Original data. (b) Centered data. Figure(s) generated by [linreg_2d_bayes_centering_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_bayes_centering_pymc3.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n linreg_2d_bayes_centering_pymc3.py
###Output
_____no_output_____
###Markdown
Figure 11.23: Posterior marginals for the parameters in the multi-leg example. Figure(s) generated by [multi_collinear_legs_numpyro.py](https://github.com/probml/pyprobml/blob/master/scripts/multi_collinear_legs_numpyro.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n multi_collinear_legs_numpyro.py
###Output
_____no_output_____
###Markdown
Figure 11.24: Posteriors for the multi-leg example. (a) Joint posterior $p(\beta _l,\beta _r| \mathcal D )$(b) Posterior of $p(\beta _l + \beta _r | data)$. Figure(s) generated by [multi_collinear_legs_numpyro.py](https://github.com/probml/pyprobml/blob/master/scripts/multi_collinear_legs_numpyro.py)
###Code
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
%reload_ext autoreload
%autoreload 2
!pip install superimport deimport -qqq
import superimport
def try_deimport():
try:
from deimport.deimport import deimport
deimport(superimport)
except Exception as e:
print(e)
print('finished!')
try_deimport()
%run -n multi_collinear_legs_numpyro.py
###Output
_____no_output_____ |
notebooks/05_Date_and_Time.ipynb | ###Markdown
Python: Date and Time Data
The standard library for manipulating dates and times is [`datetime`](https://docs.python.org/3/library/datetime.html). We will use the more full-featured [`pendulum`](https://github.com/crsmithdev/pendulum/) library to illustrate concepts for working with date/time data, but the concepts are generally also applicable to the use of `datetime`. Note that `pandas` also has a core set of functions for working with time series data.
- Documentation for [pendulum](http://pendulum.readthedocs.io/en/latest/)
- Documentation for [datetime](https://pymotw.com/2/datetime/)
- Documentation for [Pandas time series functions](https://pandas.pydata.org/pandas-docs/stable/timeseries.html)
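As a quick orientation (an added sketch, not part of the original notes), here is the same timezone-aware "now" obtained with the standard library and with pendulum; pendulum's main advantage is how convenient the conversions and formatting are.
###Code
# Added sketch: standard-library datetime vs pendulum for a timezone-aware "now".
from datetime import datetime, timezone
import pendulum

std_now = datetime.now(timezone.utc)   # aware datetime in UTC
pdl_now = pendulum.now('UTC')          # pendulum DateTime in UTC
print(std_now.isoformat())
print(pdl_now.in_timezone('Australia/Brisbane').to_iso8601_string())
###Output
_____no_output_____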
###Code
import pendulum
import arrow
from datetime import datetime
###Output
_____no_output_____
###Markdown
A readable display for date and time
###Code
fmt = 'ddd hh:mm:ss A, DD-MMM-YYYY'
###Output
_____no_output_____
###Markdown
Creation
Local time now
###Code
local = pendulum.now()
local
local.format(fmt='LLL')
local.format(fmt='ddd, DD MMM YYYY, h:mm:ss A')
###Output
_____no_output_____
###Markdown
UTC
Coordinated Universal Time (UTC) is the basis for civil time today. This 24-hour time standard is kept using highly precise atomic clocks combined with the Earth's rotation. In practice, UTC shares the same current time as Greenwich Mean Time (GMT).
###Code
utc = local.in_timezone('utc')
utc.format('LLL')
###Output
_____no_output_____
###Markdown
Creation from timestamps
###Code
import time
ts = time.time()
ts
pendulum.from_timestamp(ts).format(fmt)
###Output
_____no_output_____
###Markdown
Creation from strings
###Code
fisher_birthday = arrow.get('Fisher was born on October 21, 1956', 'MMMM DD, YYYY')
fisher_birthday.format('dddd, DD MMM YYYY')
###Output
_____no_output_____
###Markdown
From Unix date command
###Code
ts = ! date
ts
tt = pendulum.parse(ts[0], strict=False)
tt.format(fmt)
###Output
_____no_output_____
###Markdown
Creation from values
###Code
santa_is_coming = pendulum.datetime(2017, 12, 24, 23, 59, 59)
santa_is_coming.format(fmt)
###Output
_____no_output_____
###Markdown
Conversion between time zones
###Code
utc.in_timezone('local').format(fmt)
hawaii = utc.in_timezone('US/Hawaii')
hawaii.format(fmt)
singapore = utc.in_timezone('Asia/Singapore')
singapore.format(fmt)
paris = utc.in_timezone('Europe/Paris')
paris.format(fmt)
###Output
_____no_output_____
###Markdown
Shifting
###Code
current = pendulum.now()
current.format(fmt)
homework_due = current.add(weeks=1, hours=5)
homework_due.format(fmt)
###Output
_____no_output_____
###Markdown
Replacing
###Code
last_year = current.replace(year=2016)
last_year.format(fmt)
###Output
_____no_output_____
###Markdown
Periods and durations
###Code
past = current.add(hours=-4, minutes=-30)
current.diff(past).in_seconds()
current.diff(past).in_words()
###Output
_____no_output_____
###Markdown
Ranges and iteration
###Code
start = pendulum.now()
stop = start.add(months=3)
period = stop - start
for m in period.range('months'):
print(m.format(fmt))
for m in period.range('weeks'):
print(m.format(fmt))
###Output
Fri 04:47:07 PM, 30-Aug-2019
Fri 04:47:07 PM, 06-Sep-2019
Fri 04:47:07 PM, 13-Sep-2019
Fri 04:47:07 PM, 20-Sep-2019
Fri 04:47:07 PM, 27-Sep-2019
Fri 04:47:07 PM, 04-Oct-2019
Fri 04:47:07 PM, 11-Oct-2019
Fri 04:47:07 PM, 18-Oct-2019
Fri 04:47:07 PM, 25-Oct-2019
Fri 04:47:07 PM, 01-Nov-2019
Fri 04:47:07 PM, 08-Nov-2019
Fri 04:47:07 PM, 15-Nov-2019
Fri 04:47:07 PM, 22-Nov-2019
Fri 04:47:07 PM, 29-Nov-2019
###Markdown
Generating readable strings
Show a human-readable difference between two times. By default, the difference is from the current time.
###Code
homework_due
homework_due.diff_for_humans()
homework_due.diff_for_humans(locale='zh')
homework_due.diff_for_humans(locale='ko')
###Output
_____no_output_____
###Markdown
Conversion between pendulum and datetime
pendulum
###Code
t1 = pendulum.now()
t1
###Output
_____no_output_____
###Markdown
pendulum -> datetime
Note: pendulum's `DateTime` class inherits from the standard `datetime.datetime`, so explicit conversion is not necessary.
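A quick way to confirm this (an added check): a pendulum `DateTime` passes an `isinstance` test against the standard `datetime` class.
###Code
# Added check: pendulum's DateTime subclasses datetime.datetime,
# so it can be used anywhere a standard datetime is expected.
isinstance(pendulum.now(), datetime)
###Output
_____no_output_____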
###Code
t1.timestamp()
datetime.fromtimestamp(t1.timestamp())
###Output
_____no_output_____
###Markdown
Compare with direct datetime call
###Code
t2 = datetime.now()
t2
###Output
_____no_output_____
###Markdown
Full compatibility with datetime
###Code
datetime.timestamp(t2)
datetime.timestamp(t1)
###Output
_____no_output_____ |
test/CAB420_Week1_Prac_Q2_Solution(1) (2).ipynb | ###Markdown
CAB420, Week 1 Practical - Question 2 Solution
Linear Regression
Using the dataset from Problem 1, split the data into training, validation and testing as follows:
* Training: all data from the years 2014-2016
* Validation: all data from 2017
* Testing: all data from 2018
Develop a regression model to predict one of the cycleway data series in your dataset. In developing this model you should:
* Initially, use all weather data (temperature, rainfall and solar exposure) and all other data series for a particular counter type (i.e. if you're predicting cyclists inbound for a counter, use all other cyclist inbound counters)
* Use p-values, Q-Q plots, and performance on the validation set to remove terms and improve the model.
When you have finished refining the model, evaluate it on the test set, and compare the Root Mean Squared Error (RMSE) for the training, validation and test sets.
In training the model, you will need to ensure that you have no samples (i.e. rows) with missing data. As such, you should remove samples with missing data from the dataset before training and evaluating the model. This may also mean that you have to remove some columns that contain large amounts of missing data.
###Code
# Unlike MATLAB, core Python is limited to a few data types and built-in methods.
# That's OK though, because there is a tonne of open source packages that do
# pretty much everything we need; we just need to import them.
# numpy handles pretty much anything that is a number/vector/matrix/array
import numpy as np
# pandas handles dataframes (exactly the same as tables in MATLAB)
import pandas as pd
# matplotlib emulates MATLAB's plotting functionality
import matplotlib.pyplot as plt
# statsmodels is the package that is going to perform the regression analysis
from statsmodels import api as sm
from scipy import stats
from sklearn.metrics import mean_squared_error
# os allows us to manipulate variables on our local machine, such as paths and environment variables
import os
# self-explanatory: dates and times
from datetime import datetime, date
# a helper package to help us iterate over objects
import itertools
###Output
_____no_output_____
###Markdown
Start by loading the data we merged in Q1.
###Code
combined = pd.read_csv('combined(1).csv')
combined['Date']= pd.to_datetime(combined['Date'])
combined.head()
###Output
_____no_output_____
###Markdown
Now find the columns/features/covariates that have a suitable amount of data: let's drop any column with more than 300 missing values.
###Code
threshold = 300
columns_to_remove = []
for column in combined.columns.values:
    if np.sum(combined[column].isna()) > threshold:
        # add this column to the list that should be removed
        columns_to_remove.append(column)
print(columns_to_remove)
print(len(columns_to_remove))
# now let's remove them
combined = combined.drop(columns_to_remove, axis=1)
print(combined.shape)
###Output
['North Brisbane Bikeway Mann Park Windsor Cyclists Outbound', 'Jack Pesch Bridge Pedestrians Outbound', 'Kedron Brook Bikeway Lutwyche Pedestrians Outbound', 'Kedron Brook Bikeway Mitchelton Pedestrian Outbound', 'Ekibin Park Pedestrians Outbound', 'Kedron Brook Bikeway Mitchelton', 'Bishop Street Cyclists Inbound', 'Riverwalk Cyclists Inbound', 'Granville Street Bridge Pedestrians Outbound', 'Riverwalk Cyclists Outbound', 'Kedron Brook Bikeway Mitchelton Cyclist Inbound', 'Granville Street Bridge Cyclists Inbound', 'Kedron Brook Bikeway Lutwyche Pedestrians Inbound', 'Ekibin Park Cyclists Inbound', 'Kedron Brook Bikeway Lutwyche Cyclists Inbound', 'Granville Street Bridge Pedestrians Inbound', 'Kedron Brook Bikeway Lutwyche', 'Ekibin Park Cyclists Outbound', 'Ekibin Park Pedestrians Inbound', 'Granville Street Bridge Cyclists Outbound', 'Bishop Street Pedestrians Outbound', 'Riverwalk Pedestrians Inbound', 'Riverwalk Pedestrians Outbound', 'Jack Pesch Bridge Cyclists Inbound', 'Jack Pesch Bridge Pedestrians Inbound', 'Kedron Brook Bikeway Mitchelton Cyclist Outbound', 'Bishop Street Cyclists Outbound', 'Jack Pesch Bridge Cyclists Outbound', 'Kedron Brook Bikeway Lutwyche Cyclists Outbound', 'Bishop Street Pedestrians Inbound', 'Story Bridge West Cyclists Outbound', 'Kedron Brook Bikeway Mitchelton Pedestrian Inbound']
32
(1826, 25)
###Markdown
Now drop any rows that contain a NaN.
###Code
print(np.sum(combined.isna(), axis=1))
print(np.sum(np.sum(combined.isna(), axis=1) > 0))
nans = combined.isna()
print(type(nans))
nans.to_csv('nans.csv')
combined_filtered = combined.dropna(axis=0)
# let's have a look at the final dataset
print(combined_filtered.head())
print('Final dataset shape = {}'.format(combined_filtered.shape))
print(combined.iloc[11, :])
###Output
0 3
1 3
2 3
3 3
4 3
..
1821 0
1822 0
1823 0
1824 0
1825 0
Length: 1826, dtype: int64
404
<class 'pandas.core.frame.DataFrame'>
Unnamed: 0 Rainfall amount (millimetres) Date \
169 169 0.0 2014-06-19
170 170 5.8 2014-06-20
171 171 0.0 2014-06-21
172 172 5.2 2014-06-22
173 173 0.2 2014-06-23
Maximum temperature (Degree C) Daily global solar exposure (MJ/m*m) \
169 20.3 8.0
170 22.5 9.1
171 25.6 12.9
172 24.2 13.0
173 24.1 13.6
Story Bridge East Pedestrian Inbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
Schulz Canal Bridge Cyclists Outbound \
169 55.0
170 49.0
171 67.0
172 76.0
173 69.0
Story Bridge West Pedestrian Outbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
Bicentennial Bikeway Pedestrians Inbound \
169 1630.0
170 1170.0
171 1289.0
172 1542.0
173 1862.0
Story Bridge West Pedestrian Inbound ... \
169 0.0 ...
170 0.0 ...
171 0.0 ...
172 0.0 ...
173 0.0 ...
Bicentennial Bikeway Pedestrians Outbound \
169 1900.0
170 1586.0
171 1847.0
172 2126.0
173 2180.0
Story Bridge East Cyclists Outbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
Bicentennial Bikeway Cyclists Outbound \
169 333.0
170 403.0
171 642.0
172 635.0
173 631.0
Story Bridge East Pedestrian Outbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
North Brisbane Bikeway Mann Park Windsor Pedestrian Outbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
Story Bridge West Cyclists Inbound Bicenntenial Bikeway \
169 0.0 4223.0
170 0.0 3619.0
171 0.0 4423.0
172 0.0 5023.0
173 0.0 5329.0
Story Bridge East Cyclists Inbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
North Brisbane Bikeway Mann Park Windsor Pedestrian Inbound \
169 0.0
170 0.0
171 0.0
172 0.0
173 0.0
Schulz Canal Bridge Cyclists Inbound
169 60.0
170 45.0
171 72.0
172 82.0
173 74.0
[5 rows x 25 columns]
Final dataset shape = (1422, 25)
Unnamed: 0 11
Rainfall amount (millimetres) 0
Date 2014-01-12 00:00:00
Maximum temperature (Degree C) 30.6
Daily global solar exposure (MJ/m*m) 27.5
Story Bridge East Pedestrian Inbound 0
Schulz Canal Bridge Cyclists Outbound 121
Story Bridge West Pedestrian Outbound 0
Bicentennial Bikeway Pedestrians Inbound 1431
Story Bridge West Pedestrian Inbound 0
Unnamed: 1 0:00
Bicentennial Bikeway Cyclists Inbound 659
Schulz Canal Bridge Pedestrians Inbound 158
North Brisbane Bikeway Mann Park Windsor Cyclists Inbound NaN
Schulz Canal Bridge Pedestrians Outbound 41
Bicentennial Bikeway Pedestrians Outbound 2597
Story Bridge East Cyclists Outbound 0
Bicentennial Bikeway Cyclists Outbound 659
Story Bridge East Pedestrian Outbound 0
North Brisbane Bikeway Mann Park Windsor Pedestrian Outbound NaN
Story Bridge West Cyclists Inbound 0
Bicenntenial Bikeway 5346
Story Bridge East Cyclists Inbound 0
North Brisbane Bikeway Mann Park Windsor Pedestrian Inbound NaN
Schulz Canal Bridge Cyclists Inbound 127
Name: 11, dtype: object
###Markdown
Split the data into training, validation and test sets. We'll split by time such that 2014-2016 is training, 2017 is validation and 2018 is testing. As a sanity check, we'll print the size of each set when we're finished.
###Code
train = combined_filtered[combined_filtered.Date < datetime(year=2017, month=1, day=1)]
val = combined_filtered[((combined_filtered.Date >= datetime(year=2017, month=1, day=1)) &
(combined_filtered.Date < datetime(year=2018, month=1, day=1)))]
test = combined_filtered[((combined_filtered.Date >= datetime(year=2018, month=1, day=1)) &
(combined_filtered.Date < datetime(year=2019, month=1, day=1)))]
print('num train = {}'.format(train.shape[0]))
print('val train = {}'.format(val.shape[0]))
print('test train = {}'.format(test.shape[0]))
###Output
num train = 888
val train = 276
test train = 258
###Markdown
Now we want to perform linear regression using Ordinary Least Squares (OLS). To start with, we'll use all of the weather data from the BOM.
###Code
X_bom = ['Rainfall amount (millimetres)',
'Daily global solar exposure (MJ/m*m)',
'Maximum temperature (Degree C)']
###Output
_____no_output_____
###Markdown
We can use any of the counters that we chose. We'll select 'Bicentennial Bikeway Cyclists Inbound' as our response, and use the rest of the inbound counters as our predictors alongside the BOM data.
###Code
# want to use all of the cyclist inbound variables
X_bcc = [x for x in train.columns.values if 'Cyclists Inbound' in x]
# remove the response variable from here
X_bcc.remove('Bicentennial Bikeway Cyclists Inbound')
# combine this list of variables together by just extending the
# BOM data with the BCC data
X_variables = X_bom + X_bcc
Y_variable = 'Bicentennial Bikeway Cyclists Inbound'
Y_train = np.array(train[Y_variable], dtype=np.float64)
X_train = np.array(train[X_variables], dtype=np.float64)
# want to add a constant to the model (the y-axis intercept)
X_train = sm.add_constant(X_train)
###Output
_____no_output_____
###Markdown
Also create validation and test data.
###Code
Y_val = np.array(val[Y_variable], dtype=np.float64)
X_val = np.array(val[X_variables], dtype=np.float64)
X_val = sm.add_constant(X_val)
Y_test = np.array(test[Y_variable], dtype=np.float64)
X_test = np.array(test[X_variables], dtype=np.float64)
X_test = sm.add_constant(X_test)
###Output
_____no_output_____
###Markdown
Now create the model and evaluate it
###Code
# create the linear model
model = sm.OLS(Y_train, X_train)
# fit the model
model_1_fit = model.fit()
pred = model_1_fit.predict(X_val)
print('Model 1 RMSE = {}'.format(
np.sqrt(mean_squared_error(Y_val, model_1_fit.predict(X_val)))))
print(model_1_fit.summary())
print(model_1_fit.params)
fig, ax = plt.subplots(figsize=(8,6))
sm.qqplot(model_1_fit.resid, ax=ax, line='s')
plt.title('Q-Q Plot for Linear Regression')
plt.show()
###Output
Model 1 RMSE = 623.3791739360704
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.474
Model: OLS Adj. R-squared: 0.469
Method: Least Squares F-statistic: 113.1
Date: Thu, 04 Mar 2021 Prob (F-statistic): 4.59e-118
Time: 18:13:13 Log-Likelihood: -6899.6
No. Observations: 888 AIC: 1.382e+04
Df Residuals: 880 BIC: 1.385e+04
Df Model: 7
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 739.2276 142.489 5.188 0.000 459.569 1018.886
x1 -2.1218 1.726 -1.230 0.219 -5.509 1.265
x2 -5.9035 4.200 -1.406 0.160 -14.146 2.339
x3 -34.0164 6.637 -5.125 0.000 -47.042 -20.990
x4 -0.4872 0.585 -0.833 0.405 -1.635 0.661
x5 1.1847 0.104 11.385 0.000 0.980 1.389
x6 1.9407 0.108 17.964 0.000 1.729 2.153
x7 6.2318 1.169 5.330 0.000 3.937 8.527
==============================================================================
Omnibus: 150.696 Durbin-Watson: 0.247
Prob(Omnibus): 0.000 Jarque-Bera (JB): 233.912
Skew: -1.139 Prob(JB): 1.61e-51
Kurtosis: 4.062 Cond. No. 3.79e+03
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.79e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
[ 7.39227580e+02 -2.12180551e+00 -5.90348877e+00 -3.40164060e+01
-4.87249158e-01 1.18469111e+00 1.94071221e+00 6.23180686e+00]
###Markdown
Our initial residual plot looks pretty bad. The curved trend in our residuals suggests that the i.i.d. error assumption made when performing ordinary least squares does not hold here. In particular, the variance does not appear to be constant across our samples, meaning that our dataset is heteroskedastic (you don't need to worry too much about the implications of this for this class, but the concepts of homoskedasticity and heteroskedasticity are important for the successful application of stats/ML models). Despite this poor fit, we will continue and see if we can tidy things up within an OLS model.
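As an added check (not part of the original practical), the constant-variance assumption can also be tested formally; the sketch below runs statsmodels' Breusch-Pagan test on the first model's residuals, where a very small p-value supports the heteroskedasticity concern.
###Code
# Added sketch: Breusch-Pagan heteroskedasticity test on model 1's residuals,
# using the design matrix X_train from the cells above.
from statsmodels.stats.diagnostic import het_breuschpagan

lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(model_1_fit.resid, X_train)
print('Breusch-Pagan LM p-value = {:.3g}'.format(lm_pval))
###Output
_____no_output_____
###Markdown
Now, let's see whether any variables show little correlation with our response variable.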
###Code
all_variables = X_variables + ['Bicentennial Bikeway Cyclists Inbound']
corr_coeffs = train[all_variables].corr()
plt.figure(figsize=[15, 15])
plt.matshow(corr_coeffs)
plt.colorbar();
print(np.array(corr_coeffs))
###Output
[[ 1. -0.1373654 -0.03216845 -0.16416081 -0.12134364 -0.08392224
-0.28375855 -0.16317026]
[-0.1373654 1. 0.64749344 0.35713766 0.24915125 0.1162428
0.52359272 0.09517715]
[-0.03216845 0.64749344 1. 0.25038631 0.21496214 0.11422163
0.44445062 0.02452469]
[-0.16416081 0.35713766 0.25038631 1. 0.34952993 0.5092007
0.53424962 0.45741757]
[-0.12134364 0.24915125 0.21496214 0.34952993 1. -0.25427468
0.35673773 0.22037637]
[-0.08392224 0.1162428 0.11422163 0.5092007 -0.25427468 1.
0.30354061 0.54449222]
[-0.28375855 0.52359272 0.44445062 0.53424962 0.35673773 0.30354061
1. 0.39493691]
[-0.16317026 0.09517715 0.02452469 0.45741757 0.22037637 0.54449222
0.39493691 1. ]]
###Markdown
Looks like there is little evidence in our dataset of a linear relationship (correlation) between some of the weather variables and our response. So, let's start removing variables and see what happens.
###Code
to_remove = [X_variables[0]]
print('Variable to remove -> {}'.format(to_remove[0]))
train = train.drop(X_variables[0], axis=1)
# also want to remove these variable names from the X_variable list
X_variables.remove(to_remove[0])
print(X_variables)
# now let's create a new model and perform regression on that
X_train = np.array(train[X_variables], dtype=np.float64)
# want to add a constant to the model (the y-axis intercept)
X_train = sm.add_constant(X_train)
# also creating validation and testing data
Y_val = np.array(val[Y_variable], dtype=np.float64)
X_val = np.array(val[X_variables], dtype=np.float64)
X_val = sm.add_constant(X_val)
Y_test = np.array(test[Y_variable], dtype=np.float64)
X_test = np.array(test[X_variables], dtype=np.float64)
X_test = sm.add_constant(X_test)
# now make the model and fit it
model_2 = sm.OLS(Y_train, X_train)
# fit the model without any regularisation
model_2_fit = model_2.fit()
pred = model_2_fit.predict(X_val)
print('Model 2 RMSE = {}'.format(
np.sqrt(mean_squared_error(Y_val, model_2_fit.predict(X_val)))))
print(model_2_fit.summary())
print(model_2_fit.params)
###Output
Model 2 RMSE = 622.699754823652
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.473
Model: OLS Adj. R-squared: 0.469
Method: Least Squares F-statistic: 131.6
Date: Thu, 04 Mar 2021 Prob (F-statistic): 8.12e-119
Time: 18:13:14 Log-Likelihood: -6900.4
No. Observations: 888 AIC: 1.381e+04
Df Residuals: 881 BIC: 1.385e+04
Df Model: 6
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 726.3396 142.145 5.110 0.000 447.358 1005.321
x1 -5.6302 4.195 -1.342 0.180 -13.863 2.603
x2 -35.0112 6.589 -5.313 0.000 -47.944 -22.079
x3 -0.4869 0.585 -0.832 0.405 -1.635 0.661
x4 1.1879 0.104 11.416 0.000 0.984 1.392
x5 1.9417 0.108 17.969 0.000 1.730 2.154
x6 6.5553 1.140 5.753 0.000 4.319 8.792
==============================================================================
Omnibus: 148.184 Durbin-Watson: 0.243
Prob(Omnibus): 0.000 Jarque-Bera (JB): 228.034
Skew: -1.129 Prob(JB): 3.04e-50
Kurtosis: 4.032 Cond. No. 3.78e+03
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.78e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
[ 7.26339560e+02 -5.63023974e+00 -3.50112395e+01 -4.86859430e-01
1.18794596e+00 1.94168626e+00 6.55531776e+00]
###Markdown
The term for `X_variables[2]` (labelled x3 in the summary above) still has a large p-value, so we'll remove that variable as well.
###Code
to_remove = [X_variables[2]]
print('Variable to remove -> {}'.format(to_remove[0]))
train = train.drop([X_variables[2]], axis=1)
# also want to remove these variable names from the X_variable list
X_variables.remove(to_remove[0])
print(X_variables)
# now let's create a new model and perform regression on that
X_train = np.array(train[X_variables], dtype=np.float64)
# want to add a constant to the model (the y-axis intercept)
X_train = sm.add_constant(X_train)
# also creating validation and testing data
Y_val = np.array(val[Y_variable], dtype=np.float64)
X_val = np.array(val[X_variables], dtype=np.float64)
X_val = sm.add_constant(X_val)
Y_test = np.array(test[Y_variable], dtype=np.float64)
X_test = np.array(test[X_variables], dtype=np.float64)
X_test = sm.add_constant(X_test)
# now make the model and fit it
model_3 = sm.OLS(Y_train, X_train)
# fit the model without any regularisation
model_3_fit = model_3.fit()
pred = model_3_fit.predict(X_val)
print('Model 3 RMSE = {}'.format(
np.sqrt(mean_squared_error(Y_val, model_3_fit.predict(X_val)))))
print(model_3_fit.summary())
print(model_3_fit.params)
###Output
Model 3 RMSE = 621.548527040436
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.472
Model: OLS Adj. R-squared: 0.469
Method: Least Squares F-statistic: 157.9
Date: Thu, 04 Mar 2021 Prob (F-statistic): 8.66e-120
Time: 18:13:14 Log-Likelihood: -6900.7
No. Observations: 888 AIC: 1.381e+04
Df Residuals: 882 BIC: 1.384e+04
Df Model: 5
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 716.0746 141.584 5.058 0.000 438.194 993.955
x1 -6.1679 4.144 -1.488 0.137 -14.301 1.966
x2 -34.5590 6.566 -5.264 0.000 -47.445 -21.673
x3 1.1484 0.093 12.408 0.000 0.967 1.330
x4 1.8895 0.088 21.472 0.000 1.717 2.062
x5 6.3699 1.117 5.701 0.000 4.177 8.563
==============================================================================
Omnibus: 151.481 Durbin-Watson: 0.236
Prob(Omnibus): 0.000 Jarque-Bera (JB): 235.988
Skew: -1.141 Prob(JB): 5.70e-52
Kurtosis: 4.080 Cond. No. 3.68e+03
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.68e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
[716.07456939 -6.16793976 -34.55904113 1.1483853 1.88950759
6.36988372]
###Markdown
x1 still has a large p-value, so we'll remove it too.
###Code
to_remove = [X_variables[0]]
print('Variable to remove -> {}'.format(to_remove[0]))
train = train.drop([X_variables[0]], axis=1)
# also want to remove these variable names from the X_variable list
X_variables.remove(to_remove[0])
print(X_variables)
# now let's create a new model and perform regression on that
X_train = np.array(train[X_variables], dtype=np.float64)
# want to add a constant to the model (the y-axis intercept)
X_train = sm.add_constant(X_train)
# also creating validation and testing data
Y_val = np.array(val[Y_variable], dtype=np.float64)
X_val = np.array(val[X_variables], dtype=np.float64)
X_val = sm.add_constant(X_val)
Y_test = np.array(test[Y_variable], dtype=np.float64)
X_test = np.array(test[X_variables], dtype=np.float64)
X_test = sm.add_constant(X_test)
# now make the model and fit it
model_4 = sm.OLS(Y_train, X_train)
# fit the model without any regularisation
model_4_fit = model_4.fit()
pred = model_4_fit.predict(X_val)
print('Model 4 RMSE = {}'.format(
np.sqrt(mean_squared_error(Y_val, model_4_fit.predict(X_val)))))
print(model_4_fit.summary())
print(model_4_fit.params)
###Output
Model 4 RMSE = 614.9652599304148
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.471
Model: OLS Adj. R-squared: 0.469
Method: Least Squares F-statistic: 196.5
Date: Thu, 04 Mar 2021 Prob (F-statistic): 1.75e-120
Time: 18:13:53 Log-Likelihood: -6901.8
No. Observations: 888 AIC: 1.381e+04
Df Residuals: 883 BIC: 1.384e+04
Df Model: 4
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 787.3752 133.325 5.906 0.000 525.705 1049.045
x1 -39.8489 5.524 -7.213 0.000 -50.691 -29.006
x2 1.1437 0.093 12.357 0.000 0.962 1.325
x3 1.8933 0.088 21.510 0.000 1.721 2.066
x4 5.8630 1.065 5.506 0.000 3.773 7.953
==============================================================================
Omnibus: 145.697 Durbin-Watson: 0.229
Prob(Omnibus): 0.000 Jarque-Bera (JB): 222.667
Skew: -1.116 Prob(JB): 4.45e-49
Kurtosis: 4.018 Cond. No. 3.46e+03
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.46e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
[787.3751559 -39.84886523 1.14374877 1.89334112 5.86299853]
###Markdown
Finally, we'll run the model on the test data.
###Code
pred = model_4_fit.predict(X_test)
rmse_test = np.sqrt(mean_squared_error(Y_test, pred))
fig = plt.figure(figsize=[12, 8])
ax = fig.add_subplot(1, 1, 1)
ax.plot(np.arange(len(pred)), pred, label='Predicted')
ax.plot(np.arange(len(Y_test)), Y_test, label='Actual')
ax.set_title(rmse_test)
ax.legend()
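
# Added sketch (not part of the original solution): the practical asks us to
# compare RMSE across the training, validation and test sets, so we report all
# three for the final model here.
for name, X_, Y_ in [('train', X_train, Y_train),
                     ('validation', X_val, Y_val),
                     ('test', X_test, Y_test)]:
    rmse = np.sqrt(mean_squared_error(Y_, model_4_fit.predict(X_)))
    print('Model 4 {} RMSE = {:.2f}'.format(name, rmse))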
###Output
_____no_output_____ |
lecture2/exercise.ipynb | ###Markdown
Exercise
This is the exercise for Lecture 2. Build a model using PyTorch and configure an optimization algorithm.
Splitting the data into training and test sets
###Code
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
digits_data = datasets.load_digits()
digit_images = digits_data.data
labels = digits_data.target
x_train, x_test, t_train, t_test = train_test_split(digit_images, labels)  # 25% of the data is held out for testing (the default)
# convert to PyTorch tensors
x_train = torch.tensor(x_train, dtype=torch.float32)
t_train = torch.tensor(t_train, dtype=torch.int64)
x_test = torch.tensor(x_test, dtype=torch.float32)
t_test = torch.tensor(t_test, dtype=torch.int64)
###Output
_____no_output_____
###Markdown
Building the model
Using the `Sequential` class from the `nn` module, build a model that is displayed as follows by `print(net)`:
```
Sequential(
  (0): Linear(in_features=64, out_features=128, bias=True)
  (1): ReLU()
  (2): Linear(in_features=128, out_features=64, bias=True)
  (3): ReLU()
  (4): Linear(in_features=64, out_features=10, bias=True)
)
```
###Code
from torch import nn
net = nn.Sequential(
    # ------- Write your code from here -------
nn.Linear(64, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10)
    # ------- Up to here -------
)
print(net)
###Output
Sequential(
(0): Linear(in_features=64, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=64, bias=True)
(3): ReLU()
(4): Linear(in_features=64, out_features=10, bias=True)
)
###Markdown
Training
Train the model. Set up the optimization algorithm; choose whichever optimizer you like from the following page:
https://pytorch.org/docs/stable/optim.html
###Code
from torch import optim
# 交差エントロピー誤差関数
loss_fnc = nn.CrossEntropyLoss()
# 最適化アルゴリズム
optimizer = optim.SGD(net.parameters(), lr=0.01)
# 損失のログ
record_loss_train = []
record_loss_test = []
# 1000エポック学習
for i in range(1000):
# 勾配を0に
optimizer.zero_grad()
# 順伝播
y_train = net(x_train)
y_test = net(x_test)
# Compute the loss
loss_train = loss_fnc(y_train, t_train)
loss_test = loss_fnc(y_test, t_test)
record_loss_train.append(loss_train.item())
record_loss_test.append(loss_test.item())
# Backpropagation (compute gradients)
loss_train.backward()
# Update the parameters
optimizer.step()
if i%100 == 0:
print("Epoch:", i, "Loss_Train:", loss_train.item(), "Loss_Test:", loss_test.item())
###Output
Epoch: 0 Loss_Train: 2.754333019256592 Loss_Test: 2.782325506210327
Epoch: 100 Loss_Train: 0.3653141260147095 Loss_Test: 0.3780445456504822
Epoch: 200 Loss_Train: 0.19143040478229523 Loss_Test: 0.22563523054122925
Epoch: 300 Loss_Train: 0.13576538860797882 Loss_Test: 0.1811671108007431
Epoch: 400 Loss_Train: 0.10628854483366013 Loss_Test: 0.15894435346126556
Epoch: 500 Loss_Train: 0.08696656674146652 Loss_Test: 0.14534437656402588
Epoch: 600 Loss_Train: 0.07303904742002487 Loss_Test: 0.13582417368888855
Epoch: 700 Loss_Train: 0.06243514269590378 Loss_Test: 0.128996342420578
Epoch: 800 Loss_Train: 0.054069940000772476 Loss_Test: 0.12396707385778427
Epoch: 900 Loss_Train: 0.047345004975795746 Loss_Test: 0.1200050413608551
###Markdown
Loss history
###Code
import matplotlib.pyplot as plt
plt.plot(range(len(record_loss_train)), record_loss_train, label="Train")
plt.plot(range(len(record_loss_test)), record_loss_test, label="Test")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Error")
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
y_test = net(x_test)
count = (y_test.argmax(1) == t_test).sum().item()
print("正解率:", str(count/len(y_test)*100) + "%")
###Output
Accuracy: 96.44444444444444%
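###Markdown
Beyond the overall accuracy, a per-class breakdown shows which digits the network confuses. The following is a small sketch (not part of the original exercise) that reuses `net`, `x_test`, and `t_test` from above:
###Code
# Sketch: confusion matrix on the test set; rows are true digits, columns are predictions.
# torch.no_grad() avoids building a computation graph during evaluation.
import torch
from sklearn.metrics import confusion_matrix
with torch.no_grad():
    preds = net(x_test).argmax(1)
print(confusion_matrix(t_test.numpy(), preds.numpy()))
###Output
_____no_output_____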
###Markdown
Sample solution Refer to the following only if you are completely stuck.
###Code
from torch import nn
net = nn.Sequential(
# ------- Write your code from here -------
nn.Linear(64, 128), # Fully connected layer
nn.ReLU(), # ReLU
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10)
# ------- End of your code -------
)
print(net)
from torch import optim
# Cross-entropy loss function
loss_fnc = nn.CrossEntropyLoss()
# Optimization algorithm
optimizer = optim.Adam(net.parameters()) # Write your code here
# Loss logs
record_loss_train = []
record_loss_test = []
# Train for 1000 epochs
for i in range(1000):
# Reset gradients to 0
optimizer.zero_grad()
# Forward propagation
y_train = net(x_train)
y_test = net(x_test)
# Compute the loss
loss_train = loss_fnc(y_train, t_train)
loss_test = loss_fnc(y_test, t_test)
record_loss_train.append(loss_train.item())
record_loss_test.append(loss_test.item())
# Backpropagation (compute gradients)
loss_train.backward()
# Update the parameters
optimizer.step()
if i%100 == 0:
print("Epoch:", i, "Loss_Train:", loss_train.item(), "Loss_Test:", loss_test.item())
###Output
_____no_output_____
###Markdown
Exercise This is the exercise for Lecture 2. Build a model using PyTorch and set up an optimization algorithm. Splitting the data into training and test sets
###Code
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
digits_data = datasets.load_digits()
digit_images = digits_data.data
labels = digits_data.target
x_train, x_test, t_train, t_test = train_test_split(digit_images, labels) # 25% of the data is used for testing
# Convert to Tensors
x_train = torch.tensor(x_train, dtype=torch.float32)
t_train = torch.tensor(t_train, dtype=torch.int64)
x_test = torch.tensor(x_test, dtype=torch.float32)
t_test = torch.tensor(t_test, dtype=torch.int64)
###Output
_____no_output_____
###Markdown
Building the model Using the `Sequential` class of the `nn` module, build a model that is displayed by `print(net)` as follows.```Sequential( (0): Linear(in_features=64, out_features=128, bias=True) (1): ReLU() (2): Linear(in_features=128, out_features=64, bias=True) (3): ReLU() (4): Linear(in_features=64, out_features=10, bias=True))```
###Code
from torch import nn
net = nn.Sequential(
# ------- Write your code from here -------
# ------- End of your code -------
)
print(net)
###Output
_____no_output_____
###Markdown
Training Train the model. Set up the optimization algorithm. Choose any optimization algorithm you like from the following page: https://pytorch.org/docs/stable/optim.html
###Code
from torch import optim
# Cross-entropy loss function
loss_fnc = nn.CrossEntropyLoss()
# Optimization algorithm
optimizer = # Write your code here
# Loss logs
record_loss_train = []
record_loss_test = []
# Train for 1000 epochs
for i in range(1000):
# Reset gradients to 0
optimizer.zero_grad()
# Forward propagation
y_train = net(x_train)
y_test = net(x_test)
# Compute the loss
loss_train = loss_fnc(y_train, t_train)
loss_test = loss_fnc(y_test, t_test)
record_loss_train.append(loss_train.item())
record_loss_test.append(loss_test.item())
# Backpropagation (compute gradients)
loss_train.backward()
# Update the parameters
optimizer.step()
if i%100 == 0:
print("Epoch:", i, "Loss_Train:", loss_train.item(), "Loss_Test:", loss_test.item())
###Output
_____no_output_____
###Markdown
Loss history
###Code
import matplotlib.pyplot as plt
plt.plot(range(len(record_loss_train)), record_loss_train, label="Train")
plt.plot(range(len(record_loss_test)), record_loss_test, label="Test")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Error")
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
y_test = net(x_test)
count = (y_test.argmax(1) == t_test).sum().item()
print("正解率:", str(count/len(y_test)*100) + "%")
###Output
_____no_output_____
###Markdown
Sample solution Refer to the following only if you are completely stuck.
###Code
from torch import nn
net = nn.Sequential(
# ------- Write your code from here -------
nn.Linear(64, 128), # Fully connected layer
nn.ReLU(), # ReLU
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10)
# ------- End of your code -------
)
print(net)
from torch import optim
# Cross-entropy loss function
loss_fnc = nn.CrossEntropyLoss()
# Optimization algorithm
optimizer = optim.Adam(net.parameters()) # Write your code here
# Loss logs
record_loss_train = []
record_loss_test = []
# Train for 1000 epochs
for i in range(1000):
# Reset gradients to 0
optimizer.zero_grad()
# Forward propagation
y_train = net(x_train)
y_test = net(x_test)
# Compute the loss
loss_train = loss_fnc(y_train, t_train)
loss_test = loss_fnc(y_test, t_test)
record_loss_train.append(loss_train.item())
record_loss_test.append(loss_test.item())
# Backpropagation (compute gradients)
loss_train.backward()
# Update the parameters
optimizer.step()
if i%100 == 0:
print("Epoch:", i, "Loss_Train:", loss_train.item(), "Loss_Test:", loss_test.item())
###Output
_____no_output_____ |
Applied Data Science Capstone/5. Present Data-Driven Insights/Applied DS EDA Data Wrang.ipynb | ###Markdown
**Space X Falcon 9 First Stage Landing Prediction** Lab 2: Data wrangling Estimated time needed: **60** minutes In this lab, we will perform some Exploratory Data Analysis (EDA) to find some patterns in the data and determine what would be the label for training supervised models.In the data set, there are several different cases where the booster did not land successfully. Sometimes a landing was attempted but failed due to an accident; for example, True Ocean means the mission outcome was successfully landed to a specific region of the ocean while False Ocean means the mission outcome was unsuccessfully landed to a specific region of the ocean. True RTLS means the mission outcome was successfully landed to a ground pad False RTLS means the mission outcome was unsuccessfully landed to a ground pad.True ASDS means the mission outcome was successfully landed on a drone ship False ASDS means the mission outcome was unsuccessfully landed on a drone ship.In this lab we will mainly convert those outcomes into Training Labels with `1` means the booster successfully landed `0` means it was unsuccessful. Falcon 9 first stage will land successfully  Several examples of an unsuccessful landing are shown here:  ObjectivesPerform exploratory Data Analysis and determine Training Labels* Exploratory Data Analysis* Determine Training Labels *** Import Libraries and Define Auxiliary Functions We will import the following libraries.
###Code
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
#NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
###Output
_____no_output_____
###Markdown
Data Analysis Load Space X dataset, from last section.
###Code
df=pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_1.csv")
df.head(10)
###Output
_____no_output_____
###Markdown
Identify and calculate the percentage of the missing values in each attribute
###Code
df.isnull().sum()/df.count()*100
###Output
_____no_output_____
###Markdown
Identify which columns are numerical and categorical:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
TASK 1: Calculate the number of launches on each siteThe data contains several Space X launch facilities: Cape Canaveral Space Launch Complex 40 VAFB SLC 4E , Vandenberg Air Force Base Space Launch Complex 4E (SLC-4E), Kennedy Space Center Launch Complex 39A KSC LC 39A .The location of each Launch Is placed in the column LaunchSite Next, let's see the number of launches for each site.Use the method value_counts() on the column LaunchSite to determine the number of launches on each site:
###Code
# Apply value_counts() on column LaunchSite
launchsitecount = df.value_counts('LaunchSite')
launchsitecount
###Output
_____no_output_____
###Markdown
Each launch aims to an dedicated orbit, and here are some common orbit types: * LEO: Low Earth orbit (LEO)is an Earth-centred orbit with an altitude of 2,000 km (1,200 mi) or less (approximately one-third of the radius of Earth),\[1] or with at least 11.25 periods per day (an orbital period of 128 minutes or less) and an eccentricity less than 0.25.\[2] Most of the manmade objects in outer space are in LEO \[1].* VLEO: Very Low Earth Orbits (VLEO) can be defined as the orbits with a mean altitude below 450 km. Operating in these orbits can provide a number of benefits to Earth observation spacecraft as the spacecraft operates closer to the observation\[2].* GTO A geosynchronous orbit is a high Earth orbit that allows satellites to match Earth's rotation. Located at 22,236 miles (35,786 kilometers) above Earth's equator, this position is a valuable spot for monitoring weather, communications and surveillance. Because the satellite orbits at the same speed that the Earth is turning, the satellite seems to stay in place over a single longitude, though it may drift north to south,” NASA wrote on its Earth Observatory website \[3] .* SSO (or SO): It is a Sun-synchronous orbit also called a heliosynchronous orbit is a nearly polar orbit around a planet, in which the satellite passes over any given point of the planet's surface at the same local mean solar time \[4] .* ES-L1 :At the Lagrange points the gravitational forces of the two large bodies cancel out in such a way that a small object placed in orbit there is in equilibrium relative to the center of mass of the large bodies. L1 is one such point between the sun and the earth \[5] .* HEO A highly elliptical orbit, is an elliptic orbit with high eccentricity, usually referring to one around Earth \[6].* ISS A modular space station (habitable artificial satellite) in low Earth orbit. It is a multinational collaborative project between five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada) \[7] * MEO Geocentric orbits ranging in altitude from 2,000 km (1,200 mi) to just below geosynchronous orbit at 35,786 kilometers (22,236 mi). Also known as an intermediate circular orbit. These are "most commonly at 20,200 kilometers (12,600 mi), or 20,650 kilometers (12,830 mi), with an orbital period of 12 hours \[8] * HEO Geocentric orbits above the altitude of geosynchronous orbit (35,786 km or 22,236 mi) \[9] * GEO It is a circular geosynchronous orbit 35,786 kilometres (22,236 miles) above Earth's equator and following the direction of Earth's rotation \[10] * PO It is one type of satellites in which a satellite passes above or nearly above both poles of the body being orbited (usually a planet such as the Earth \[11] some are shown in the following plot:  TASK 2: Calculate the number and occurrence of each orbit Use the method .value_counts() to determine the number and occurrence of each orbit in the column Orbit
###Code
# Apply value_counts on Orbit column
orbitcount = df.value_counts('Orbit')
orbitcount
###Output
_____no_output_____
###Markdown
TASK 3: Calculate the number and occurence of mission outcome per orbit type Use the method .value_counts() on the column Outcome to determine the number of landing_outcomes.Then assign it to a variable landing_outcomes.
###Code
# landing_outcomes = values on Outcome column
landing_outcomes = df.value_counts('Outcome')
landing_outcomes
###Output
_____no_output_____
###Markdown
True Ocean means the mission outcome was successfully landed to a specific region of the ocean while False Ocean means the mission outcome was unsuccessfully landed to a specific region of the ocean. True RTLS means the mission outcome was successfully landed to a ground pad False RTLS means the mission outcome was unsuccessfully landed to a ground pad.True ASDS means the mission outcome was successfully landed to a drone ship False ASDS means the mission outcome was unsuccessfully landed to a drone ship. None ASDS and None None these represent a failure to land.
###Code
for i,outcome in enumerate(landing_outcomes.keys()):
print(i,outcome)
###Output
0 True ASDS
1 None None
2 True RTLS
3 False ASDS
4 True Ocean
5 False Ocean
6 None ASDS
7 False RTLS
###Markdown
We create a set of outcomes where the second stage did not land successfully:
###Code
bad_outcomes=set(landing_outcomes.keys()[[1,3,5,6,7]])
bad_outcomes
###Output
_____no_output_____
###Markdown
TASK 4: Create a landing outcome label from Outcome column Using the Outcome, create a list where the element is zero if the corresponding row in Outcome is in the set bad_outcome; otherwise, it's one. Then assign it to the variable landing_class:
###Code
landing_class =[]
for outcome in df['Outcome']:
if outcome in bad_outcomes:
landing_class.append(0)
else:
landing_class.append(1)
###Output
_____no_output_____
###Markdown
This variable will represent the classification variable that represents the outcome of each launch. If the value is zero, the first stage did not land successfully; one means the first stage landed Successfully
###Code
df['Class']=landing_class
df[['Class']].head(8)
df.head(5)
###Output
_____no_output_____
###Markdown
We can use the following line of code to determine the success rate:
###Code
df["Class"].mean()
###Output
_____no_output_____ |
tcga-series/post/02_unsupervised_learning_gene_expressions.ipynb | ###Markdown
Discovering genetic patterns of liver cancer - Unsupervised Approach TL;DRWe discussed in our first notebook (Link to first nbk) that liver cancer has the second highest mortality rate. Hence, we have explored and analyzed the publicly-available liver cancer dataset to identify candidate biomarkers related to disease progression using common bioinformatic Python and R toolkits. IntroductionThis notebook focuses on applying an unsupervised clustering approach to identify the underlying patterns between the RNA-Seq data representing the hallmarks of cancer and liver cancer progression (i.e. tumor stage). An unsupervised learning approach helps uncover structure within data to establish relationships without any previously assigned labels. We would also be exploring the hypothesis that an association exists between patient's cluster membership derived from gene expression data and patients' liver cancer stage. To validate this hypothesis, we have followed the following machine learning pipeline and established our conclusion.  Load libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn import preprocessing
from sklearn.decomposition import PCA, SparsePCA
from sklearn.cluster import KMeans
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Set variables
###Code
data_dir=""
rnaseq_file=data_dir+"../workshop3/lihc_rnaseq.csv.gz"
clinical_file=data_dir+"../workshop3/revised_clinical.tsv"
###Output
_____no_output_____
###Markdown
Data loading and cleaning RNA Seq Data
###Code
rnaseq = (pd.
read_csv(rnaseq_file,compression="gzip").
set_index('bcr_patient_barcode').
applymap(lambda x : int(np.ceil(x)))
)
display(rnaseq.shape)
display(rnaseq.head())
gene_name_logical = [len(x[0])>1 for x in rnaseq.columns.str.split('|')]
sub = rnaseq.loc[:,gene_name_logical]
sub.columns = [x[0] for x in sub.columns.str.split('|')]
rnaseq_sub = sub.copy()
rnaseq_sub.head()
rnaseq_sub.index = rnaseq_sub.index.map(lambda x: '-'.join(x.split('-')[:3]).lower())
print(rnaseq_sub.shape)
rnaseq_sub.head()
###Output
(423, 20501)
###Markdown
Using only the genes from the hallmarks of cancer Here, we use the hallmarks-of-cancer geneset dictionary (pickle file) to limit the patients' RNA gene expressions to only those genes that are representative of the hallmarks of cancer. This restricts the analysis from 20k+ gene expressions to around 4k+, which are more likely to be correlated with liver cancer stage.
###Code
geneset_dict = pickle.load(open('hallmarks_of_cancer_geneset_dictionary.pkl','rb'))
all_hallmark_genes = np.unique(np.concatenate([v for k,v in geneset_dict.items()]))
len(all_hallmark_genes)
rnaseq_sub = rnaseq_sub.loc[:,np.intersect1d(rnaseq_sub.columns.values,all_hallmark_genes)]
print(rnaseq_sub.shape)
rnaseq_sub.head()
###Output
(423, 4223)
###Markdown
Clinical
###Code
clinical = pd.read_csv(clinical_file, sep='\t')
clinical['submitter_id'] = clinical['submitter_id'].map(lambda x: x.lower())
clinical.head()
###Output
_____no_output_____
###Markdown
Merge RNASeq data and Clinical data to validate the hypothesisWe merged the two datasets. We have included the demographic information like gender, race, ethnicity, etc and also tumor stage information for each patient available in the clinical data. We have also formatted tumor stage names to standardize the nomenclature.
###Code
full_df_stage = pd.merge(rnaseq_sub.reset_index(), clinical[['submitter_id','gender','race','ethnicity','tumor_stage']], left_on='bcr_patient_barcode', right_on='submitter_id', how='inner') \
.set_index('bcr_patient_barcode') \
.drop('submitter_id', axis=1)
#ensuring ID uniqueness
full_df_stage.index = [x + '-' + str(i) for i,x in enumerate(full_df_stage.index)]
print(full_df_stage.shape)
full_df_stage.head()
# Subset out the recognizable stages
tumor_stages = clinical['tumor_stage'].value_counts()
tumor_stages[tumor_stages.index.str.startswith('stage')]
# Subset full dataframe for patient samples that have a corresponding tumor stage
full_df_stage = full_df_stage.loc[full_df_stage['tumor_stage'].str.startswith('stage')]
# Since there are substages (e.g., stage iia and stage iib), we will convert them to the 4 main stages
full_df_stage['tumor_stage'] = full_df_stage['tumor_stage'] \
.str.replace('a', '') \
.str.replace('b', '') \
.str.replace('c', '') \
.str.replace('v', '') \
.str.replace('stge','stage')
df_stage = full_df_stage.reset_index()
df_stage.head()
###Output
_____no_output_____
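###Markdown
Note that stripping the letter 'v' above also reduces any "stage iv" label to "stage i". As an alternative — a sketch, not part of the original pipeline — the main stage could be extracted from the raw labels with a regular expression instead of character removal:
###Code
# Sketch: extract the main stage directly from the raw clinical labels.
# 'stage iv' is listed before 'stage i{1,3}' so it is not collapsed into 'stage i';
# labels such as 'not reported' simply become NaN.
main_stage = clinical['tumor_stage'].str.extract(r'^(stage iv|stage i{1,3})', expand=False)
main_stage.value_counts(dropna=False)
###Output
_____no_output_____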
###Markdown
Merged RNA Seq data with clinical demographic patient information for ClusteringWe merged the two datasets. We have only extracted the demographic information available in clinical data of the patients. This merged dataset does not have any liver cancer stage information (any labels). We will use this data as an input to our clustering pipeline as shown below:
###Code
# Merging demographic information like gender, race, ethnicity with gene expression data
full_df = pd.merge(rnaseq_sub.reset_index(), clinical[['submitter_id','gender','race','ethnicity']], left_on='bcr_patient_barcode', right_on='submitter_id', how='inner') \
.set_index('bcr_patient_barcode') \
.drop('submitter_id', axis=1)
#ensuring ID uniqueness
full_df.index = [x + '-' + str(i) for i,x in enumerate(full_df.index)]
full_df.head()
###Output
_____no_output_____
###Markdown
One-Hot Encoding (Categorical Encoding): As we see in the data frame obtained in the previous step, there are some categorical fields like gender, race, ethnicity, etc. We one-hot encoded these categorical fields into new columns for use by our machine learning models. We see new fields like gender_female, gender_male, race_asian, etc. below after this transformation. This is also called categorical encoding. pandas' get_dummies function performs this transformation.
###Code
# One hot encoding on full dataframe to convert categorical fields into binary fields
full_df_onehot = pd.get_dummies(full_df, drop_first=False)
full_df_onehot.head()
###Output
_____no_output_____
###Markdown
Filtering those fields that are not required.
###Code
# Filtering columns that are not required after one hot encoding
full_df_onehot_filter = full_df_onehot.drop(['race_not reported','ethnicity_not reported','gender_male'],axis=1)
full_df_onehot_filter.head()
###Output
_____no_output_____
###Markdown
Data Standardization & Scaling: There are more than 4,200 RNA gene expressions and also more than 10 binary demographic fields obtained after one-hot encoding. Hence, these fields can have very different value ranges. To eliminate the bias introduced by these scale differences in the data, we have used a min-max scaler to standardize the entire dataset. This feature scaling approach keeps all feature values within a standard range of 0 to 1.
###Code
# Transforming the data such that the features are within a specific range e.g. [0, 1]. - Feature Scaling
x = full_df_onehot_filter #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
genome_clinic_df = pd.DataFrame(x_scaled,columns=full_df_onehot_filter\
.loc[:, full_df_onehot_filter.columns != 'index'].columns)
index_df = full_df_onehot_filter.reset_index()
genome_clinic_std_concat = pd.concat([index_df['index'],genome_clinic_df],axis=1)
genome_clinic_std_concat.head()
genome_clinic_std_concat.set_index('index', inplace=True)
genome_clinic_std_concat.head()
###Output
_____no_output_____
###Markdown
Dimensionality Reduction with sparse PCA The dataset obtained after the above data transformations contains more than 4,200 features. This is a very high-dimensional feature space. However, we need a lower-dimensional representation of the feature space for K-Means clustering to function accurately. Hence, we utilize a dimensionality reduction technique. There are thousands of genes expressed in any given sample, but a patient may have very different genes expressed based on factors such as the tissue type, the sampling technique, and the time of sampling. This implies a lot of sparseness in the feature space among our samples. We have leveraged the sparse PCA module available in the scikit-learn library in Python. This reduces the feature space of our dataset from 4,200+ columns to 2 principal components that capture the most variance explaining the structure of the dataset.
###Code
# Dimensionality reduction using sparse PCA to reduce the 4,200+ feature space to 2 principal components
n=2
pcs = ['PC'+str(x) for x in range(n)]
pca = SparsePCA(n_components=n,max_iter=20,n_jobs=4)
principalComponents = pca.fit_transform(genome_clinic_std_concat)
#print(pca.explained_variance_)
principalDf = pd.DataFrame(data = principalComponents
, columns = pcs)
principalDfConcat = pd.concat([index_df['index'],principalDf],axis=1)
principalDfConcat.head()
###Output
_____no_output_____
###Markdown
K-Means Clustering: K-Means clustering is a popular clustering algorithm that segments data into K groups based on the underlying data patterns. We apply scikit-learn's K-Means module to cluster the 2-dimensional principal-component representation of the data set. We then combine the cluster label for each patient_id with the principal components.
###Code
# Clustering Model building using KMeans and concatenating labels with the corresponding patient
#from scipy import stats
kmeans = KMeans(n_clusters=3, random_state=0).fit(principalDfConcat.set_index('index'))
labels = kmeans.labels_
#Glue back to original data
principalDfConcat['clusters'] = labels
cols = ['patient_id']
cols.extend(pcs)
cols.extend(['clusters'])
principalDfConcat.columns= cols
principalDfConcat.head()
###Output
_____no_output_____
###Markdown
Determining the optimal number of clusters - Elbow Method: The best number of clusters, i.e. K, needs to be determined either from domain knowledge or by the elbow method. We used the elbow method to determine the best K for our dataset. The elbow plot below does not show a clear elbow. However, the first bend is at K=3, which suggests it is a reasonable choice for the number of clusters.
###Code
# Checking for the best K when the number of clusters is not known - using an elbow plot
distortions = []
for k in range(1,11):
kmeans = KMeans(
n_clusters=k, init = "random",
n_init=10, max_iter=300, random_state=0
)
kmeans.fit(principalDfConcat.set_index('patient_id'))
distortions.append(kmeans.inertia_)
#plot
plt.plot(range(1,11), distortions, marker='o')
plt.xlabel("Number of clusters")
plt.ylabel("Distortions")
plt.show()
###Output
_____no_output_____
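###Markdown
As a complementary check on the choice of K — a sketch that is not part of the original pipeline and uses only the two principal-component features — the mean silhouette score can be compared across candidate values of K:
###Code
# Sketch: mean silhouette score for K = 2..6; higher values indicate better-separated clusters.
from sklearn.metrics import silhouette_score
X_pcs = principalDfConcat.set_index('patient_id')[pcs]
for k in range(2, 7):
    km = KMeans(n_clusters=k, random_state=0).fit(X_pcs)
    print(k, round(silhouette_score(X_pcs, km.labels_), 3))
###Output
_____no_output_____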
###Markdown
As we see in the elbow plot, we do not get an exact elbow shape. However, the first bend is at K=3. We thus assume 3 is an appropriate number of clusters. Assessing similarity between cluster outcomes and the cancer stages provided in clinical data: The clustering algorithm groups the patients into three groups (0, 1, and 2) that cover the entire data set. However, these labels by themselves do not indicate a patient's cancer stage.
###Code
df_stage_valid = pd.merge(df_stage[['index','tumor_stage']], principalDfConcat[['patient_id','clusters']], right_on='patient_id', left_on='index', how='left') \
.set_index('patient_id') \
.drop('index', axis=1)
#ensuring ID uniqueness
df_stage_valid.index = [x + '-' + str(i) for i,x in enumerate(df_stage_valid.index)]
df_stage_valid.head()
###Output
_____no_output_____
###Markdown
We calculated the percentage of each cluster group within each cancer stage to assess the initial hypothesis about the relationship between the clusters derived from the patients' RNA gene expressions on one end and the liver cancer stages on the other. We see that each cluster is spread across multiple cancer stages.
###Code
# Aggregating at clinical cancer stage level to check for similarity in both outcomes.
tmp = df_stage_valid.reset_index().groupby('tumor_stage')['clusters'].value_counts()
display(tmp)
(tmp/df_stage_valid.shape[0]).round(2)
###Output
_____no_output_____
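###Markdown
To put the hypothesized association between cluster membership and tumor stage on a more formal footing — a sketch, not part of the original analysis — a chi-square test of independence can be run on the cluster-by-stage contingency table:
###Code
# Sketch: chi-square test of independence between cluster labels and tumor stage.
# A small p-value would suggest cluster membership is not independent of stage;
# with few samples in some cells, the result should be interpreted cautiously.
from scipy.stats import chi2_contingency
ct = pd.crosstab(df_stage_valid['tumor_stage'], df_stage_valid['clusters'])
chi2, p, dof, expected = chi2_contingency(ct)
print('chi2 =', round(chi2, 2), ', p-value =', round(p, 4))
###Output
_____no_output_____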
###Markdown
Discovering genetic patterns of liver cancer - Unsupervised Approach TL;DRWe discussed in our first notebook (Link to first nbk) that liver cancer has the second highest mortality rate. Hence, we have explored and analyzed the publicly-available liver cancer dataset to identify candidate biomarkers related to disease progression using common bioinformatic Python and R toolkits. IntroductionThis notebook focuses on applying an unsupervised clustering approach to identify the underlying patterns between the RNA-Seq data representing the hallmarks of cancer and liver cancer progression (i.e. tumor stage). An unsupervised learning approach helps uncover structure within data to establish relationships without any previously assigned labels. We would also be exploring the hypothesis that an association exists between patient's cluster membership derived from gene expression data and patients' liver cancer stage. To validate this hypothesis, we have followed the following machine learning pipeline and established our conclusion.  Load libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn import preprocessing
from sklearn.decomposition import PCA, SparsePCA
from sklearn.cluster import KMeans
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Set variables
###Code
data_dir=""
rnaseq_file=data_dir+"./workshop3/lihc_rnaseq.csv.gz"
clinical_file=data_dir+"./workshop3/revised_clinical.tsv"
###Output
_____no_output_____
###Markdown
Data loading and cleaning RNA Seq Data
###Code
rnaseq = (pd.
read_csv(rnaseq_file,compression="gzip").
set_index('bcr_patient_barcode').
applymap(lambda x : int(np.ceil(x)))
)
display(rnaseq.shape)
display(rnaseq.head())
gene_name_logical = [len(x[0])>1 for x in rnaseq.columns.str.split('|')]
sub = rnaseq.loc[:,gene_name_logical]
sub.columns = [x[0] for x in sub.columns.str.split('|')]
rnaseq_sub = sub.copy()
rnaseq_sub.head()
rnaseq_sub.index = rnaseq_sub.index.map(lambda x: '-'.join(x.split('-')[:3]).lower())
print(rnaseq_sub.shape)
rnaseq_sub.head()
###Output
(423, 20501)
###Markdown
Using only the genes from the hallmarks of cancer Here, we use the hallmarks-of-cancer geneset dictionary (pickle file) to limit the patients' RNA gene expressions to only those genes that are representative of the hallmarks of cancer. This restricts the analysis from 20k+ gene expressions to around 4k+, which are more likely to be correlated with liver cancer stage.
###Code
geneset_dict = pickle.load(open('hallmarks_of_cancer_geneset_dictionary.pkl','rb'))
all_hallmark_genes = np.unique(np.concatenate([v for k,v in geneset_dict.items()]))
len(all_hallmark_genes)
rnaseq_sub = rnaseq_sub.loc[:,np.intersect1d(rnaseq_sub.columns.values,all_hallmark_genes)]
print(rnaseq_sub.shape)
rnaseq_sub.head()
###Output
(423, 4223)
###Markdown
Clinical
###Code
clinical = pd.read_csv(clinical_file, sep='\t')
clinical['submitter_id'] = clinical['submitter_id'].map(lambda x: x.lower())
clinical.head()
###Output
_____no_output_____
###Markdown
Merge RNASeq data and Clinical data to validate the hypothesisWe merged the two datasets. We have included the demographic information like gender, race, ethnicity, etc and also tumor stage information for each patient available in the clinical data. We have also formatted tumor stage names to standardize the nomenclature.
###Code
full_df_stage = pd.merge(rnaseq_sub.reset_index(), clinical[['submitter_id','gender','race','ethnicity','tumor_stage']], left_on='bcr_patient_barcode', right_on='submitter_id', how='inner') \
.set_index('bcr_patient_barcode') \
.drop('submitter_id', axis=1)
#ensuring ID uniqueness
full_df_stage.index = [x + '-' + str(i) for i,x in enumerate(full_df_stage.index)]
print(full_df_stage.shape)
full_df_stage.head()
# Subset out the recognizable stages
tumor_stages = clinical['tumor_stage'].value_counts()
tumor_stages[tumor_stages.index.str.startswith('stage')]
# Subset full dataframe for patient samples that have a corresponding tumor stage
full_df_stage = full_df_stage.loc[full_df_stage['tumor_stage'].str.startswith('stage')]
# Since there are substages (e.g., stage iia and stage iib), we will convert them to the 4 main stages
full_df_stage['tumor_stage'] = full_df_stage['tumor_stage'] \
.str.replace('a', '') \
.str.replace('b', '') \
.str.replace('c', '') \
.str.replace('v', '') \
.str.replace('stge','stage')
df_stage = full_df_stage.reset_index()
df_stage.head()
###Output
_____no_output_____
###Markdown
Merged RNA Seq data with clinical demographic patient information for ClusteringWe merged the two datasets. We have only extracted the demographic information available in clinical data of the patients. This merged dataset does not have any liver cancer stage information (any labels). We will use this data as an input to our clustering pipeline as shown below:
###Code
# Merging demographic information like gender, race, ethnicity with gene expression data
full_df = pd.merge(rnaseq_sub.reset_index(), clinical[['submitter_id','gender','race','ethnicity']], left_on='bcr_patient_barcode', right_on='submitter_id', how='inner') \
.set_index('bcr_patient_barcode') \
.drop('submitter_id', axis=1)
#ensuring ID uniqueness
full_df.index = [x + '-' + str(i) for i,x in enumerate(full_df.index)]
full_df.head()
###Output
_____no_output_____
###Markdown
One-Hot Encoding (Categorical Encoding): As we see in the data frame obtained in the previous step, there are some categorical fields like gender, race, ethnicity, etc. We one-hot encoded these categorical fields into new columns for use by our machine learning models. We see new fields like gender_female, gender_male, race_asian, etc. below after this transformation. This is also called categorical encoding. pandas' get_dummies function performs this transformation.
###Code
# One hot encoding on full dataframe to convert categorical fields into binary fields
full_df_onehot = pd.get_dummies(full_df, drop_first=False)
full_df_onehot.head()
###Output
_____no_output_____
###Markdown
Filtering those fields that are not required.
###Code
# Filtering columns that are not required after one hot encoding
full_df_onehot_filter = full_df_onehot.drop(['race_not reported','ethnicity_not reported','gender_male'],axis=1)
full_df_onehot_filter.head()
###Output
_____no_output_____
###Markdown
Data Standardization & Scaling: There are more than 4,200 RNA gene expressions and also more than 10 binary demographic fields obtained after one-hot encoding. Hence, these fields can have very different value ranges. To eliminate the bias introduced by these scale differences in the data, we have used a min-max scaler to standardize the entire dataset. This feature scaling approach keeps all feature values within a standard range of 0 to 1.
###Code
# Transforming the data such that the features are within a specific range e.g. [0, 1]. - Feature Scaling
x = full_df_onehot_filter #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
genome_clinic_df = pd.DataFrame(x_scaled,columns=full_df_onehot_filter\
.loc[:, full_df_onehot_filter.columns != 'index'].columns)
index_df = full_df_onehot_filter.reset_index()
genome_clinic_std_concat = pd.concat([index_df['index'],genome_clinic_df],axis=1)
genome_clinic_std_concat.head()
genome_clinic_std_concat.set_index('index', inplace=True)
genome_clinic_std_concat.head()
###Output
_____no_output_____
###Markdown
Dimensionality Reduction with sparse PCA The dataset obtained after the above data transformations contains more than 4,200 features. This is a very high-dimensional feature space. However, we need a lower-dimensional representation of the feature space for K-Means clustering to function accurately. Hence, we utilize a dimensionality reduction technique. There are thousands of genes expressed in any given sample, but a patient may have very different genes expressed based on factors such as the tissue type, the sampling technique, and the time of sampling. This implies a lot of sparseness in the feature space among our samples. We have leveraged the sparse PCA module available in the scikit-learn library in Python. This reduces the feature space of our dataset from 4,200+ columns to 2 principal components that capture the most variance explaining the structure of the dataset.
###Code
# Dimensionality reduction using sparse PCA to reduce the 4,200+ feature space to 2 principal components
n=2
pcs = ['PC'+str(x) for x in range(n)]
pca = SparsePCA(n_components=n,max_iter=20,n_jobs=4)
principalComponents = pca.fit_transform(genome_clinic_std_concat)
#print(pca.explained_variance_)
principalDf = pd.DataFrame(data = principalComponents
, columns = pcs)
principalDfConcat = pd.concat([index_df['index'],principalDf],axis=1)
principalDfConcat.head()
###Output
_____no_output_____
###Markdown
K-Means Clustering: K-Means clustering is a popular clustering algorithm that segments data into K groups based on the underlying data patterns. We apply scikit-learn's K-Means module to cluster the 2-dimensional principal-component representation of the data set. We then combine the cluster label for each patient_id with the principal components.
###Code
# Clustering Model building using KMeans and concatenating labels with the corresponding patient
#from scipy import stats
kmeans = KMeans(n_clusters=3, random_state=0).fit(principalDfConcat.set_index('index'))
labels = kmeans.labels_
#Glue back to original data
principalDfConcat['clusters'] = labels
cols = ['patient_id']
cols.extend(pcs)
cols.extend(['clusters'])
principalDfConcat.columns= cols
principalDfConcat.head()
###Output
_____no_output_____
###Markdown
Determining the optimal number of clusters - Elbow Method: The best number of clusters, i.e. K, needs to be determined either from domain knowledge or by the elbow method. We used the elbow method to determine the best K for our dataset. The elbow plot below does not show a clear elbow. However, the first bend is at K=3, which suggests it is a reasonable choice for the number of clusters.
###Code
# Checking for the best K when the number of clusters is not known - using an elbow plot
distortions = []
for k in range(1,11):
kmeans = KMeans(
n_clusters=k, init = "random",
n_init=10, max_iter=300, random_state=0
)
kmeans.fit(principalDfConcat.set_index('patient_id'))
distortions.append(kmeans.inertia_)
#plot
plt.plot(range(1,11), distortions, marker='o')
plt.xlabel("Number of clusters")
plt.ylabel("Distortions")
plt.show()
###Output
_____no_output_____
###Markdown
As we see in the elbow plot, we do not get an exact elbow shape. However, the first bend is at K=3. We thus assume 3 is an appropriate number of clusters. Assessing similarity between cluster outcomes and the cancer stages provided in clinical data: The clustering algorithm groups the patients into three groups (0, 1, and 2) that cover the entire data set. However, these labels by themselves do not indicate a patient's cancer stage.
###Code
df_stage_valid = pd.merge(df_stage[['index','tumor_stage']], principalDfConcat[['patient_id','clusters']], right_on='patient_id', left_on='index', how='left') \
.set_index('patient_id') \
.drop('index', axis=1)
#ensuring ID uniqueness
df_stage_valid.index = [x + '-' + str(i) for i,x in enumerate(df_stage_valid.index)]
df_stage_valid.head()
###Output
_____no_output_____
###Markdown
We calculated the percentage of each cluster group within each cancer stage to assess the initial hypothesis about the relationship between the clusters derived from the patients' RNA gene expressions on one end and the liver cancer stages on the other. We see that each cluster is spread across multiple cancer stages.
###Code
# Aggregating at clinical cancer stage level to check for similarity in both outcomes.
tmp = df_stage_valid.reset_index().groupby('tumor_stage')['clusters'].value_counts()
display(tmp)
(tmp/df_stage_valid.shape[0]).round(2)
###Output
_____no_output_____ |
docs/source/notebooks/survival_analysis.ipynb | ###Markdown
Bayesian Survival AnalysisAuthor: Austin Rochford[Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3.We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
import seaborn as sns
import pandas as pd
from theano import tensor as T
df = pd.read_csv(pm.get_data('mastectomy.csv'))
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == 'yes').astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
###Markdown
Each row represents observations from a woman diagnosed with breast cancer that underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. A crash course in survival analysis First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function$$S(t) = P(T > t) = 1 - F(t),$$where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is,$$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$Solving this differential equation for the survival function shows that$$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$This representation of the survival function shows that the cumulative hazard function$$\Lambda(t) = \int_0^t \lambda(s)\ ds$$is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
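###Markdown
Before turning to the data, the identity $S(t) = \exp(-\Lambda(t))$ can be made concrete with a tiny numerical sketch (illustrative only, using an arbitrary constant hazard rate):
###Code
# Sketch: with a constant hazard rate of 0.02 per month, the cumulative hazard is
# Lambda(t) = 0.02 * t and the survival function is S(t) = exp(-0.02 * t).
t_grid = np.linspace(0, 120, 121)
hazard_rate = 0.02
cumulative_hazard = hazard_rate * t_grid
survival_curve = np.exp(-cumulative_hazard)
survival_curve[[0, 60, 120]]  # survival probability at 0, 60, and 120 months
###Output
_____no_output_____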
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time,
color=blue, label='Censored')
ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time,
color=red, label='Uncensored')
ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1],
color='k', zorder=10, label='Metastized')
ax.set_xlim(left=0)
ax.set_xlabel('Months since mastectomy')
ax.set_yticks([])
ax.set_ylabel('Subject')
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc='center right');
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`. This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] Bayesian proportional hazards model The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as$$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that$$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(df[df.event == 1].time.values, bins=interval_bounds,
color=red, alpha=0.5, lw=0,
label='Uncensored');
ax.hist(df[df.event == 0].time.values, bins=interval_bounds,
color=blue, alpha=0.5, lw=0,
label='Censored');
ax.set_xlim(0, interval_bounds[-1]);
ax.set_xlabel('Months since mastectomy');
ax.set_yticks([0, 1, 2, 3]);
ax.set_ylabel('Number of observations');
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).) We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval,$$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
###Code
SEED = 5078864 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = pm.Normal('beta', 0, sd=1000)
lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We now sample from the model.
###Code
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
###Output
100%|██████████| 2000/2000 [15:44<00:00, 2.31it/s]
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace['beta'].mean())
pm.plot_posterior(trace, varnames=['beta'], color='#87ceeb');
pm.autocorrplot(trace, varnames=['beta']);
###Output
_____no_output_____
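###Markdown
The point estimate above can be supplemented with an interval — a sketch using simple posterior percentiles, not part of the original notebook — to quantify the uncertainty in the hazard ratio for metastization:
###Code
# Sketch: 95% posterior interval for the hazard ratio exp(beta).
hr_interval = np.exp(np.percentile(trace['beta'], [2.5, 97.5]))
print('95% interval for the hazard ratio:', hr_interval.round(2))
###Output
_____no_output_____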
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. Time varying effects Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, varnames=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines over time, with $\beta_j < 0$ eventually.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
_____no_output_____
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized and lived past this point died during the study. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____
###Markdown
Bayesian Survival AnalysisAuthor: Austin Rochford[Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3.We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
import seaborn as sns
from statsmodels import datasets
from theano import tensor as T
###Output
_____no_output_____
###Markdown
Fortunately, [statsmodels.datasets](http://statsmodels.sourceforge.net/0.6.0/datasets/index.html) makes it quite easy to load a number of data sets from `R`.
###Code
df = datasets.get_rdataset('mastectomy', 'HSAUR', cache=True).data
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == 'yes').astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
###Markdown
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. A crash course in survival analysis. First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function$$S(t) = P(T > t) = 1 - F(t),$$where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is,$$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$Solving this differential equation for the survival function shows that$$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$This representation of the survival function shows that the cumulative hazard function$$\Lambda(t) = \int_0^t \lambda(s)\ ds$$is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time,
color=blue, label='Censored')
ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time,
color=red, label='Uncensored')
ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1],
color='k', zorder=10, label='Metastized')
ax.set_xlim(left=0)
ax.set_xlabel('Months since mastectomy')
ax.set_yticks([])
ax.set_ylabel('Subject')
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc='center right');
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`. This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] Bayesian proportional hazards model. The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as$$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that$$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(df[df.event == 1].time.values, bins=interval_bounds,
color=red, alpha=0.5, lw=0,
label='Uncensored');
ax.hist(df[df.event == 0].time.values, bins=interval_bounds,
color=blue, alpha=0.5, lw=0,
label='Censored');
ax.set_xlim(0, interval_bounds[-1]);
ax.set_xlabel('Months since mastectomy');
ax.set_yticks([0, 1, 2, 3]);
ax.set_ylabel('Number of observations');
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).) We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval,$$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
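###Markdown
As a quick sanity check on this construction, the following sketch (hypothetical numbers, not drawn from `df`) builds the exposure row for a single subject observed for seven months with three-month intervals; we expect 3, 3, and 1 months at risk in the first three intervals and 0 afterwards.
###Code
# Hedged sketch with made-up data: one subject, observed 7 months, 3-month intervals.
demo_time = np.array([7.0])
demo_bounds = np.arange(0, 12, 3)  # interval start points 0, 3, 6, 9
demo_exposure = np.greater_equal.outer(demo_time, demo_bounds) * 3.0
demo_last = np.floor((demo_time - 0.01) / 3).astype(int)
demo_exposure[np.arange(1), demo_last] = demo_time - demo_bounds[demo_last]
demo_exposure  # expected: array([[3., 3., 1., 0.]])
###Output
_____no_output_____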
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
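To see why this Poisson trick gives the right answer, note that under the piecewise-constant hazard the likelihood contribution of subject $i$ in interval $j$ is $\lambda_{i, j}^{d_{i, j}} \exp(-t_{i, j}\ \lambda_{i, j})$, while the $\operatorname{Poisson}(t_{i, j}\ \lambda_{i, j})$ probability of observing $d_{i, j}$ is$$\frac{(t_{i, j}\ \lambda_{i, j})^{d_{i, j}}}{d_{i, j}!} \exp(-t_{i, j}\ \lambda_{i, j}).$$Since $d_{i, j} \in \{0, 1\}$, the two expressions differ only by the factor $t_{i, j}^{d_{i, j}}$, which involves the observed data but not the parameters, so the two likelihoods lead to the same posterior.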
###Code
SEED = 5078864 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = pm.Normal('beta', 0, sd=1000)
lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We now sample from the model.
###Code
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 449.81: 19%|█▉ | 38808/200000 [00:06<00:37, 4245.50it/s]
Convergence archived at 39100
Interrupted at 39,100 [19%]: Average Loss = 756.28
100%|█████████▉| 1998/2000 [01:58<00:00, 15.05it/s]/Users/fonnescj/Repos/pymc3/pymc3/step_methods/hmc/nuts.py:456: UserWarning: Chain 0 contains 52 diverging samples after tuning. If increasing `target_accept` does not help try to reparameterize.
% (self._chain_id, n_diverging))
100%|██████████| 2000/2000 [01:58<00:00, 16.84it/s]
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace['beta'].mean())
pm.plot_posterior(trace, varnames=['beta'], color='#87ceeb');
pm.autocorrplot(trace, varnames=['beta']);
###Output
_____no_output_____
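###Markdown
The point estimate above can be supplemented with a measure of uncertainty. The following short sketch (not part of the original analysis) summarizes the posterior hazard ratio $\exp(\beta)$ with its mean and a central 95% credible interval.
###Code
# Posterior mean and 95% credible interval for the hazard ratio exp(beta).
hr_samples = np.exp(trace['beta'])
hr_samples.mean(), np.percentile(hr_samples, [2.5, 97.5])
###Output
_____no_output_____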
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
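###Markdown
As a rough numerical summary of these curves, the hedged sketch below reads off an approximate posterior-mean median survival time for each group: the first interval at which the mean survival curve drops below one half (returning infinity if it never does within the observation window).
###Code
def approx_median_survival(hazard_samples):
    # Posterior-mean survival curve on the interval grid.
    surv_mean = survival(hazard_samples.mean(axis=0))
    below = np.nonzero(surv_mean < 0.5)[0]
    return interval_bounds[:-1][below[0]] if below.size > 0 else np.inf
approx_median_survival(base_hazard), approx_median_survival(met_hazard)
###Output
_____no_output_____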
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. Time varying effects. Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
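###Markdown
Before sampling, it can help to visualize what this random walk prior on the $\beta_j$ sequence looks like. The hedged sketch below draws a few paths directly from the prior described above ($\beta_1 \sim N(0, 1)$ with unit-variance increments); it is an illustration only and is not used in the model fit.
###Code
# A few draws from the random-walk prior on beta_j, for intuition.
rng = np.random.RandomState(SEED)
prior_beta = rng.normal(size=(5, n_intervals)).cumsum(axis=-1)
fig, ax = plt.subplots(figsize=(8, 4))
ax.step(interval_bounds[:-1], prior_beta.T);
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'Prior draw of $\beta_j$');
###Output
_____no_output_____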
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, varnames=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this excess risk declines over time, with $\beta_j$ eventually becoming negative.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
_____no_output_____
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized lived past this point and died during the study. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. Time varying effects. Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, var_names=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this excess risk declines over time, with $\beta_j$ eventually becoming negative.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized lived past this point and died during the study. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____
###Markdown
We have really only scratched the surface of both survival analysis and the Bayesian approach to survival analysis. More information on Bayesian survival analysis is available in Ibrahim et al. (2005). (For example, we may want to account for individual frailty in either our original or time-varying models.) This tutorial is available as an [IPython](http://ipython.org/) notebook [here](https://gist.github.com/AustinRochford/4c6b07e51a2247d678d6). It is adapted from a blog post that first appeared [here](http://austinrochford.com/posts/2015-10-05-bayes-survival.html).
###Code
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.7.0
pandas 0.25.3
seaborn 0.9.0
numpy 1.17.5
last updated: Wed Apr 22 2020
CPython 3.8.0
IPython 7.11.0
watermark 2.0.2
###Markdown
Bayesian Survival Analysis. Author: Austin Rochford. [Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3. We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from matplotlib import pyplot as plt
from pymc3.distributions.timeseries import GaussianRandomWalk
from theano import tensor as T
df = pd.read_csv(pm.get_data("mastectomy.csv"))
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == "yes").astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
###Markdown
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. A crash course in survival analysis. First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function$$S(t) = P(T > t) = 1 - F(t),$$where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is,$$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$Solving this differential equation for the survival function shows that$$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$This representation of the survival function shows that the cumulative hazard function$$\Lambda(t) = \int_0^t \lambda(s)\ ds$$is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(
patients[df.event.values == 0], 0, df[df.event.values == 0].time, color=blue, label="Censored"
)
ax.hlines(
patients[df.event.values == 1], 0, df[df.event.values == 1].time, color=red, label="Uncensored"
)
ax.scatter(
df[df.metastized.values == 1].time,
patients[df.metastized.values == 1],
color="k",
zorder=10,
label="Metastized",
)
ax.set_xlim(left=0)
ax.set_xlabel("Months since mastectomy")
ax.set_yticks([])
ax.set_ylabel("Subject")
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc="center right");
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`. This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] Bayesian proportional hazards model. The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as$$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that$$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(
df[df.event == 1].time.values,
bins=interval_bounds,
color=red,
alpha=0.5,
lw=0,
label="Uncensored",
)
ax.hist(
df[df.event == 0].time.values,
bins=interval_bounds,
color=blue,
alpha=0.5,
lw=0,
label="Censored",
)
ax.set_xlim(0, interval_bounds[-1])
ax.set_xlabel("Months since mastectomy")
ax.set_yticks([0, 1, 2, 3])
ax.set_ylabel("Number of observations")
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).) We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval,$$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
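###Markdown
A quick consistency check (a hedged sketch, not in the original notebook): every observed death should appear exactly once in the `death` matrix, so its total should match the number of observed deaths in the data.
###Code
# Each observed death is recorded exactly once in the death matrix.
death.sum() == df.event.sum()
###Output
_____no_output_____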
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
###Code
SEED = 644567 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma("lambda0", 0.01, 0.01, shape=n_intervals)
beta = pm.Normal("beta", 0, sigma=1000)
lambda_ = pm.Deterministic("lambda_", T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
###Output
_____no_output_____
###Markdown
We now sample from the model.
###Code
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [beta, lambda0]
Sampling 2 chains: 100%|██████████| 4000/4000 [05:04<00:00, 13.14draws/s]
There were 94 divergences after tuning. Increase `target_accept` or reparameterize.
There were 89 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.
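###Markdown
The warnings above report a number of divergences after tuning. A common first response (sketched below as comments only, since re-running the sampler is expensive and not done here) is to raise the NUTS target acceptance rate, at the cost of smaller step sizes and slower sampling.
###Code
# Hedged sketch, not executed: a possible re-run with a higher target
# acceptance rate, as suggested by the warnings above.
#
# with model:
#     trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED,
#                       target_accept=0.95)
###Output
_____no_output_____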
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace["beta"].mean())
pm.plot_posterior(trace, var_names=["beta"], color="#87ceeb");
pm.autocorrplot(trace, var_names=["beta"]);
###Output
_____no_output_____
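###Markdown
A numerical summary is a useful complement to these plots. The sketch below assumes the ArviZ-backed `pm.summary` bundled with this PyMC3 release and reports the posterior mean, credible interval, and convergence diagnostics for $\beta$.
###Code
# Posterior summary and convergence diagnostics for beta
# (hedged: relies on the ArviZ-based summary function).
pm.summary(trace, var_names=["beta"])
###Output
_____no_output_____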
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace["lambda0"]
met_hazard = trace["lambda0"] * np.exp(np.atleast_2d(trace["beta"]).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2.0, 1.0 - alpha / 2.0])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(
interval_bounds[:-1], base_hazard, cum_hazard, hazard_ax, color=blue, label="Had not metastized"
)
plot_with_hpd(
interval_bounds[:-1], met_hazard, cum_hazard, hazard_ax, color=red, label="Metastized"
)
hazard_ax.set_xlim(0, df.time.max())
hazard_ax.set_xlabel("Months since mastectomy")
hazard_ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
hazard_ax.legend(loc=2)
plot_with_hpd(interval_bounds[:-1], base_hazard, survival, surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival, surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max())
surv_ax.set_xlabel("Months since mastectomy")
surv_ax.set_ylabel("Survival function $S(t)$")
fig.suptitle("Bayesian survival model");
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. Time varying effects. Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma("lambda0", 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk("beta", tau=1.0, shape=n_intervals)
lambda_ = pm.Deterministic("h", lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic("mu", exposure * lambda_)
obs = pm.Poisson("obs", mu, observed=death)
###Output
_____no_output_____
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, var_names=["beta"]);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this excess risk declines over time, with $\beta_j$ eventually becoming negative.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace["beta"], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high, color=blue, alpha=0.25)
beta_hat = time_varying_trace["beta"].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue)
ax.scatter(
interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red,
zorder=10,
label="Died, cancer metastized",
)
ax.scatter(
interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue,
zorder=10,
label="Censored, cancer metastized",
)
ax.set_xlim(0, df.time.max())
ax.set_xlabel("Months since mastectomy")
ax.set_ylabel(r"$\beta_j$")
ax.legend();
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
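###Markdown
As a numerical complement to this plot (a hedged sketch, not in the original analysis), we can track the posterior probability that $\beta_j > 0$ in each interval, which summarizes how the evidence for an elevated hazard due to metastization fades over time.
###Code
# Posterior probability that beta_j > 0 in each interval.
prob_positive = (time_varying_trace["beta"] > 0).mean(axis=0)
fig, ax = plt.subplots(figsize=(8, 4))
ax.step(interval_bounds[:-1], prob_positive, color=blue)
ax.axhline(0.5, color="k", linestyle="--", lw=1)
ax.set_xlabel("Months since mastectomy")
ax.set_ylabel(r"$P(\beta_j > 0 \mid \mathrm{data})$");
###Output
_____no_output_____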
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized lived past this point and died during the study. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace["lambda0"]
tv_met_hazard = time_varying_trace["lambda0"] * np.exp(np.atleast_2d(time_varying_trace["beta"]))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(
interval_bounds[:-1],
cum_hazard(base_hazard.mean(axis=0)),
color=blue,
label="Had not metastized",
)
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)), color=red, label="Metastized")
ax.step(
interval_bounds[:-1],
cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue,
linestyle="--",
label="Had not metastized (time varying effect)",
)
ax.step(
interval_bounds[:-1],
cum_hazard(tv_met_hazard.mean(axis=0)),
color=red,
linestyle="--",
label="Metastized (time varying effect)",
)
ax.set_xlim(0, df.time.max() - 4)
ax.set_xlabel("Months since mastectomy")
ax.set_ylim(0, 2)
ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(
interval_bounds[:-1],
tv_base_hazard,
cum_hazard,
hazard_ax,
color=blue,
label="Had not metastized",
)
plot_with_hpd(
interval_bounds[:-1], tv_met_hazard, cum_hazard, hazard_ax, color=red, label="Metastized"
)
hazard_ax.set_xlim(0, df.time.max())
hazard_ax.set_xlabel("Months since mastectomy")
hazard_ax.set_ylim(0, 2)
hazard_ax.set_ylabel(r"Cumulative hazard $\Lambda(t)$")
hazard_ax.legend(loc=2)
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival, surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival, surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max())
surv_ax.set_xlabel("Months since mastectomy")
surv_ax.set_ylabel("Survival function $S(t)$")
fig.suptitle("Bayesian survival model with time varying effects");
###Output
_____no_output_____
###Markdown
We have really only scratched the surface of both survival analysis and the Bayesian approach to survival analysis. More information on Bayesian survival analysis is available in Ibrahim et al. (2005). (For example, we may want to account for individual frailty in either our original or time-varying models.) This tutorial is available as an [IPython](http://ipython.org/) notebook [here](https://gist.github.com/AustinRochford/4c6b07e51a2247d678d6). It is adapted from a blog post that first appeared [here](http://austinrochford.com/posts/2015-10-05-bayes-survival.html).
###Code
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.7.0
pandas 0.25.3
seaborn 0.9.0
numpy 1.17.5
last updated: Wed Apr 22 2020
CPython 3.8.0
IPython 7.11.0
watermark 2.0.2
###Markdown
Bayesian Survival Analysis. Author: Austin Rochford. [Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using [PyMC3](https://pymc-devs.github.io/pymc3). We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
import seaborn as sns
from statsmodels import datasets
from theano import tensor as T
###Output
Couldn't import dot_parser, loading of dot files will not be possible.
###Markdown
Fortunately, [statsmodels.datasets](http://statsmodels.sourceforge.net/0.6.0/datasets/index.html) makes it quite easy to load a number of data sets from `R`.
###Code
df = datasets.get_rdataset('mastectomy', 'HSAUR', cache=True).data
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == 'yes').astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
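###Markdown
Before modeling, it can be useful to see how deaths break down by metastization status. The following exploratory sketch (not part of the original tutorial) cross-tabulates the two indicator columns; it assumes pandas is available, which the statsmodels data loader already requires.
###Code
import pandas as pd
# Rows: metastization status; columns: censored (0) vs. died (1).
pd.crosstab(df.metastized, df.event)
###Output
_____no_output_____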
###Markdown
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. A crash course in survival analysis. First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function$$S(t) = P(T > t) = 1 - F(t),$$where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is,$$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$Solving this differential equation for the survival function shows that$$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$This representation of the survival function shows that the cumulative hazard function$$\Lambda(t) = \int_0^t \lambda(s)\ ds$$is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time,
color=blue, label='Censored')
ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time,
color=red, label='Uncensored')
ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1],
color='k', zorder=10, label='Metastized')
ax.set_xlim(left=0)
ax.set_xlabel('Months since mastectomy')
ax.set_yticks([])
ax.set_ylabel('Subject')
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc='center right');
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`. This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] Bayesian proportional hazards model. The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as$$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that$$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(df[df.event == 1].time.values, bins=interval_bounds,
color=red, alpha=0.5, lw=0,
label='Uncensored');
ax.hist(df[df.event == 0].time.values, bins=interval_bounds,
color=blue, alpha=0.5, lw=0,
label='Censored');
ax.set_xlim(0, interval_bounds[-1]);
ax.set_xlabel('Months since mastectomy');
ax.set_yticks([0, 1, 2, 3]);
ax.set_ylabel('Number of observations');
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).) We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval,$$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
###Code
SEED = 5078864 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = pm.Normal('beta', 0, sd=1000)
lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
Applied log-transform to lambda0 and added transformed lambda0_log to model.
###Markdown
We now sample from the model.
###Code
n_samples = 1000
with model:
    trace_ = pm.sample(n_samples, random_seed=SEED)
trace = trace_[100:]
###Output
_____no_output_____
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace['beta'].mean())
pm.plot_posterior(trace, varnames=['beta'], color='#87ceeb');
pm.autocorrplot(trace, varnames=['beta']);
###Output
_____no_output_____
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. **Time varying effects** Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
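Before fitting, it can help to see what this random-walk prior allows. The sketch below is illustrative only (the seed is a hypothetical choice); it draws a few prior trajectories for the $\beta_j$ by cumulatively summing standard normal increments.
###Code
# Illustrative sketch (not part of the original model): a few draws from the
# Gaussian random-walk prior on the time-varying coefficients beta_j.
import numpy as np
prior_rng = np.random.RandomState(0)  # hypothetical seed for the sketch
prior_beta_draws = prior_rng.normal(0., 1., size=(5, n_intervals)).cumsum(axis=1)
###Output
_____no_output_____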
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
Applied log-transform to lambda0 and added transformed lambda0_log to model.
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace_ = pm.sample(n_samples, random_seed=SEED)
time_varying_trace = time_varying_trace_[100:]
pm.forestplot(time_varying_trace, varnames=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines as $\beta_j < 0$ eventually.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
_____no_output_____
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized and who died during the study lived past this point. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____
###Markdown
**Bayesian Survival Analysis** Author: Austin Rochford. [Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3. We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
import seaborn as sns
import pandas as pd
from theano import tensor as T
df = pd.read_csv(pm.get_data('mastectomy.csv'))
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == 'yes').astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
###Markdown
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. **A crash course in survival analysis** First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function $$S(t) = P(T > t) = 1 - F(t),$$ where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is, $$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$ Solving this differential equation for the survival function shows that $$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$ This representation of the survival function shows that the cumulative hazard function $$\Lambda(t) = \int_0^t \lambda(s)\ ds$$ is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
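As a quick numerical illustration of the identity $S(t) = \exp(-\Lambda(t))$ (a sketch with a hypothetical constant hazard rate, not part of the analysis below): a constant hazard $\lambda$ gives $\Lambda(t) = \lambda t$ and a median survival time of $\log 2 / \lambda$.
###Code
# Sketch (hypothetical constant hazard, illustrative only): with lambda(t) = lam,
# the cumulative hazard is Lambda(t) = lam * t, so S(t) = exp(-lam * t) and the
# median survival time solves S(t) = 0.5, i.e. t = log(2) / lam.
import numpy as np
lam = 0.1  # hypothetical hazard per month
t_grid = np.linspace(0., 60., 121)
surv_const = np.exp(-lam * t_grid)
median_surv = np.interp(0.5, surv_const[::-1], t_grid[::-1])
assert np.isclose(median_surv, np.log(2) / lam, rtol=1e-2)
###Output
_____no_output_____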
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time,
color=blue, label='Censored')
ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time,
color=red, label='Uncensored')
ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1],
color='k', zorder=10, label='Metastized')
ax.set_xlim(left=0)
ax.set_xlabel('Months since mastectomy')
ax.set_yticks([])
ax.set_ylabel('Subject')
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc='center right');
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`. This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] **Bayesian proportional hazards model** The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as $$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$ Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that $$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$ If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
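To make the piecewise-constant form concrete, the sketch below (illustrative only; the endpoints and hazard values are hypothetical) evaluates $\lambda_0(t)$ by locating the interval $[s_j, s_{j+1})$ that contains $t$.
###Code
# Illustrative sketch (hypothetical endpoints s_j and values lambda_j): evaluating a
# piecewise-constant baseline hazard lambda_0(t).
import numpy as np
s_bounds = np.array([0., 3., 6., 9.])      # hypothetical interval endpoints (months)
lam_vals = np.array([0.02, 0.05, 0.04])    # hypothetical hazard in each interval
def piecewise_lambda0(t):
    # side='right' gives j with s_j <= t < s_{j+1}; clip keeps t beyond s_N in the last interval
    j = np.searchsorted(s_bounds, t, side='right') - 1
    return lam_vals[np.clip(j, 0, len(lam_vals) - 1)]
assert piecewise_lambda0(4.5) == 0.05      # 3 <= 4.5 < 6, the second interval
###Output
_____no_output_____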
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(df[df.event == 1].time.values, bins=interval_bounds,
color=red, alpha=0.5, lw=0,
label='Uncensored');
ax.hist(df[df.event == 0].time.values, bins=interval_bounds,
color=blue, alpha=0.5, lw=0,
label='Censored');
ax.set_xlim(0, interval_bounds[-1]);
ax.set_xlabel('Months since mastectomy');
ax.set_yticks([0, 1, 2, 3]);
ax.set_ylabel('Number of observations');
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).) We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval, $$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
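###Markdown
As a sanity check on this construction (a sketch with one hypothetical subject, not one of the study's patients): a subject observed for 7 months who died should contribute full 3-month exposures to the first two intervals, a 1-month exposure to the third, and a death indicator in the third interval.
###Code
# Illustrative check with a hypothetical subject (time = 7 months, event = 1).
import numpy as np
toy_time, toy_event, toy_len = 7., 1, 3.
toy_bounds = np.arange(0., 13., toy_len)                       # [0, 3, 6, 9, 12]
toy_last = int(np.floor((toy_time - 0.01) / toy_len))          # interval of death
toy_exposure = np.greater_equal(toy_time, toy_bounds[:-1]) * toy_len
toy_exposure[toy_last] = toy_time - toy_bounds[toy_last]       # partial exposure in last interval
toy_death = np.zeros(len(toy_bounds) - 1)
toy_death[toy_last] = toy_event
assert np.allclose(toy_exposure, [3., 3., 1., 0.]) and toy_death[2] == 1
###Output
_____no_output_____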
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
###Code
SEED = 644567 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = pm.Normal('beta', 0, sigma=1000)
lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We now sample from the model.
###Code
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [beta, lambda0]
Sampling 2 chains: 100%|██████████| 4000/4000 [05:04<00:00, 13.14draws/s]
There were 94 divergences after tuning. Increase `target_accept` or reparameterize.
There were 89 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace['beta'].mean())
pm.plot_posterior(trace, var_names=['beta'], color='#87ceeb');
pm.autocorrplot(trace, var_names=['beta']);
###Output
_____no_output_____
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
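# Hedged illustrative helper (not part of the original analysis): summarize the
# posterior mean survival curve by an approximate median survival time, found by
# linear interpolation on the interval grid; returns inf if S never drops below 0.5.
def median_survival_time(hazard, t=interval_bounds[:-1]):
    s = survival(hazard.mean(axis=0))
    return np.interp(0.5, s[::-1], t[::-1]) if s.min() < 0.5 else np.inf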
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. **Time varying effects** Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, var_names=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines as $\beta_j < 0$ eventually.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized and who died during the study lived past this point. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____
###Markdown
**Bayesian Survival Analysis** Author: Austin Rochford. [Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3. We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from matplotlib import pyplot as plt
from pymc3.distributions.timeseries import GaussianRandomWalk
from theano import tensor as T
df = pd.read_csv(pm.get_data('mastectomy.csv'))
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == 'yes').astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
###Markdown
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. **A crash course in survival analysis** First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function $$S(t) = P(T > t) = 1 - F(t),$$ where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is, $$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$ Solving this differential equation for the survival function shows that $$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$ This representation of the survival function shows that the cumulative hazard function $$\Lambda(t) = \int_0^t \lambda(s)\ ds$$ is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time,
color=blue, label='Censored')
ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time,
color=red, label='Uncensored')
ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1],
color='k', zorder=10, label='Metastized')
ax.set_xlim(left=0)
ax.set_xlabel('Months since mastectomy')
ax.set_yticks([])
ax.set_ylabel('Subject')
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc='center right');
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`. This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] **Bayesian proportional hazards model** The two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as $$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$ Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`. Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that $$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$ If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable. In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$. A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
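For reference, the sketch below (illustrative only, not part of the model that follows) computes a bare-bones Kaplan-Meier estimate of $S(t)$ directly from `df.time` and `df.event`; it handles ties only approximately and is meant purely as a point of comparison for the Bayesian fits.
###Code
# Hedged sketch (illustrative only): a bare-bones Kaplan-Meier estimate of S(t) from
# the observed times and event indicators; ties between deaths and censorings are
# handled only approximately here.
import numpy as np
def kaplan_meier(time, event):
    order = np.argsort(time, kind='stable')
    t_sorted = np.asarray(time)[order]
    e_sorted = np.asarray(event)[order]
    at_risk = len(t_sorted) - np.arange(len(t_sorted))        # number still at risk
    factors = np.where(e_sorted == 1, 1. - 1. / at_risk, 1.)  # censorings leave S unchanged
    return t_sorted, np.cumprod(factors)
km_times, km_surv = kaplan_meier(df.time.values, df.event.values)
###Output
_____no_output_____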
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(df[df.event == 1].time.values, bins=interval_bounds,
color=red, alpha=0.5, lw=0,
label='Uncensored');
ax.hist(df[df.event == 0].time.values, bins=interval_bounds,
color=blue, alpha=0.5, lw=0,
label='Censored');
ax.set_xlim(0, interval_bounds[-1]);
ax.set_xlabel('Months since mastectomy');
ax.set_yticks([0, 1, 2, 3]);
ax.set_ylabel('Number of observations');
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).) We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval, $$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$. We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
###Code
SEED = 644567 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = pm.Normal('beta', 0, sigma=1000)
lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We now sample from the model.
###Code
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [beta, lambda0]
Sampling 2 chains: 100%|██████████| 4000/4000 [05:04<00:00, 13.14draws/s]
There were 94 divergences after tuning. Increase `target_accept` or reparameterize.
There were 89 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace['beta'].mean())
pm.plot_posterior(trace, var_names=['beta'], color='#87ceeb');
pm.autocorrplot(trace, var_names=['beta']);
###Output
_____no_output_____
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard. These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. **Time varying effects** Another of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ forms a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$. We implement this model in `pymc3` as follows.
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, var_names=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines as $\beta_j < 0$ eventually.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
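# Hedged addition (assumes `time_varying_trace['beta']` holds the posterior samples):
# posterior-mean hazard ratios exp(beta_j) per interval; values above 1 indicate an
# elevated hazard due to metastization in that interval.
hazard_ratio_hat = np.exp(time_varying_trace['beta']).mean(axis=0)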
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized and who died during the study lived past this point. The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____
###Markdown
We have really only scratched the surface of both survival analysis and the Bayesian approach to survival analysis. More information on Bayesian survival analysis is available in Ibrahim et al. (2005). (For example, we may want to account for individual frailty in either our original or time-varying models.) This tutorial is available as an [IPython](http://ipython.org/) notebook [here](https://gist.github.com/AustinRochford/4c6b07e51a2247d678d6). It is adapted from a blog post that first appeared [here](http://austinrochford.com/posts/2015-10-05-bayes-survival.html).
###Code
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.7.0
pandas 0.25.3
seaborn 0.9.0
numpy 1.17.5
last updated: Wed Apr 22 2020
CPython 3.8.0
IPython 7.11.0
watermark 2.0.2
###Markdown
**Bayesian Survival Analysis** Author: Austin Rochford. [Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis) studies the distribution of the time to an event. Its applications span many fields across medicine, biology, engineering, and social science. This tutorial shows how to fit and analyze a Bayesian survival model in Python using PyMC3. We illustrate these concepts by analyzing a [mastectomy data set](https://vincentarelbundock.github.io/Rdatasets/doc/HSAUR/mastectomy.html) from `R`'s [HSAUR](https://cran.r-project.org/web/packages/HSAUR/index.html) package.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
import seaborn as sns
import pandas as pd
from theano import tensor as T
df = pd.read_csv(pm.get_data('mastectomy.csv'))
df.event = df.event.astype(np.int64)
df.metastized = (df.metastized == 'yes').astype(np.int64)
n_patients = df.shape[0]
patients = np.arange(n_patients)
df.head()
n_patients
###Output
_____no_output_____
###Markdown
Each row represents observations from a woman diagnosed with breast cancer who underwent a mastectomy. The column `time` represents the time (in months) post-surgery that the woman was observed. The column `event` indicates whether or not the woman died during the observation period. The column `metastized` represents whether the cancer had [metastized](https://en.wikipedia.org/wiki/Metastatic_breast_cancer) prior to surgery. This tutorial analyzes the relationship between survival time post-mastectomy and whether or not the cancer had metastized. **A crash course in survival analysis** First we introduce a (very little) bit of theory. If the random variable $T$ is the time to the event we are studying, survival analysis is primarily concerned with the survival function $$S(t) = P(T > t) = 1 - F(t),$$ where $F$ is the [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function) of $T$. It is mathematically convenient to express the survival function in terms of the [hazard rate](https://en.wikipedia.org/wiki/Survival_analysis#Hazard_function_and_cumulative_hazard_function), $\lambda(t)$. The hazard rate is the instantaneous probability that the event occurs at time $t$ given that it has not yet occurred. That is, $$\begin{align*}\lambda(t) & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t\ |\ T > t)}{\Delta t} \\ & = \lim_{\Delta t \to 0} \frac{P(t < T < t + \Delta t)}{\Delta t \cdot P(T > t)} \\ & = \frac{1}{S(t)} \cdot \lim_{\Delta t \to 0} \frac{S(t) - S(t + \Delta t)}{\Delta t} = -\frac{S'(t)}{S(t)}.\end{align*}$$ Solving this differential equation for the survival function shows that $$S(t) = \exp\left(-\int_0^t \lambda(s)\ ds\right).$$ This representation of the survival function shows that the cumulative hazard function $$\Lambda(t) = \int_0^t \lambda(s)\ ds$$ is an important quantity in survival analysis, since we may concisely write $S(t) = \exp(-\Lambda(t)).$ An important, but subtle, point in survival analysis is [censoring](https://en.wikipedia.org/wiki/Survival_analysis#Censoring). Even though the quantity we are interested in estimating is the time between surgery and death, we do not observe the death of every subject. At the point in time that we perform our analysis, some of our subjects will thankfully still be alive. In the case of our mastectomy study, `df.event` is one if the subject's death was observed (the observation is not censored) and is zero if the death was not observed (the observation is censored).
###Code
df.event.mean()
###Output
_____no_output_____
###Markdown
Just over 40% of our observations are censored. We visualize the observed durations and indicate which observations are censored below.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
blue, _, red = sns.color_palette()[:3]
ax.hlines(patients[df.event.values == 0], 0, df[df.event.values == 0].time,
color=blue, label='Censored')
ax.hlines(patients[df.event.values == 1], 0, df[df.event.values == 1].time,
color=red, label='Uncensored')
ax.scatter(df[df.metastized.values == 1].time, patients[df.metastized.values == 1],
color='k', zorder=10, label='Metastized')
ax.set_xlim(left=0)
ax.set_xlabel('Months since mastectomy')
ax.set_yticks([])
ax.set_ylabel('Subject')
ax.set_ylim(-0.25, n_patients + 0.25)
ax.legend(loc='center right');
###Output
_____no_output_____
###Markdown
When an observation is censored (`df.event` is zero), `df.time` is not the subject's survival time. All we can conclude from such a censored observation is that the subject's true survival time exceeds `df.time`.This is enough basic survival analysis theory for the purposes of this tutorial; for a more extensive introduction, consult Aalen et al.^[Aalen, Odd, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.] Bayesian proportional hazards modelThe two most basic estimators in survival analysis are the [Kaplan-Meier estimator](https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator) of the survival function and the [Nelson-Aalen estimator](https://en.wikipedia.org/wiki/Nelson%E2%80%93Aalen_estimator) of the cumulative hazard function. However, since we want to understand the impact of metastization on survival time, a risk regression model is more appropriate. Perhaps the most commonly used risk regression model is [Cox's proportional hazards model](https://en.wikipedia.org/wiki/Proportional_hazards_model). In this model, if we have covariates $\mathbf{x}$ and regression coefficients $\beta$, the hazard rate is modeled as$$\lambda(t) = \lambda_0(t) \exp(\mathbf{x} \beta).$$Here $\lambda_0(t)$ is the baseline hazard, which is independent of the covariates $\mathbf{x}$. In this example, the covariates are the one-dimensional vector `df.metastized`.Unlike in many regression situations, $\mathbf{x}$ should not include a constant term corresponding to an intercept. If $\mathbf{x}$ includes a constant term corresponding to an intercept, the model becomes [unidentifiable](https://en.wikipedia.org/wiki/Identifiability). To illustrate this unidentifiability, suppose that$$\lambda(t) = \lambda_0(t) \exp(\beta_0 + \mathbf{x} \beta) = \lambda_0(t) \exp(\beta_0) \exp(\mathbf{x} \beta).$$If $\tilde{\beta}_0 = \beta_0 + \delta$ and $\tilde{\lambda}_0(t) = \lambda_0(t) \exp(-\delta)$, then $\lambda(t) = \tilde{\lambda}_0(t) \exp(\tilde{\beta}_0 + \mathbf{x} \beta)$ as well, making the model with $\beta_0$ unidentifiable.In order to perform Bayesian inference with the Cox model, we must specify priors on $\beta$ and $\lambda_0(t)$. We place a normal prior on $\beta$, $\beta \sim N(\mu_{\beta}, \sigma_{\beta}^2),$ where $\mu_{\beta} \sim N(0, 10^2)$ and $\sigma_{\beta} \sim U(0, 10)$.A suitable prior on $\lambda_0(t)$ is less obvious. We choose a semiparametric prior, where $\lambda_0(t)$ is a piecewise constant function. This prior requires us to partition the time range in question into intervals with endpoints $0 \leq s_1 < s_2 < \cdots < s_N$. With this partition, $\lambda_0 (t) = \lambda_j$ if $s_j \leq t < s_{j + 1}$. With $\lambda_0(t)$ constrained to have this form, all we need to do is choose priors for the $N - 1$ values $\lambda_j$. We use independent vague priors $\lambda_j \sim \operatorname{Gamma}(10^{-2}, 10^{-2}).$ For our mastectomy example, we make each interval three months long.
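###Markdown
To make the unidentifiability argument above concrete, here is a small editorial check (the numbers are arbitrary, and this cell is not part of the original tutorial): shifting an intercept by $\delta$ while rescaling the baseline hazard by $\exp(-\delta)$ produces exactly the same hazard.
###Code
# Editorial sketch with arbitrary numbers: the two parameterizations described
# above give identical hazards, so an intercept cannot be identified.
import numpy as np

toy_lambda0 = np.array([0.01, 0.02, 0.05])  # hypothetical baseline hazard
toy_x = np.array([0.0, 1.0])                # covariate values (e.g. metastized or not)
beta0, beta_coef, delta = 0.7, 1.2, 0.4     # arbitrary parameters

hazard_a = np.outer(np.exp(beta0 + toy_x * beta_coef), toy_lambda0)
hazard_b = np.outer(np.exp((beta0 + delta) + toy_x * beta_coef),
                    toy_lambda0 * np.exp(-delta))

print(np.allclose(hazard_a, hazard_b))  # True: both describe the same model
###Output
_____no_output_____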
###Code
interval_length = 3
interval_bounds = np.arange(0, df.time.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
###Output
_____no_output_____
###Markdown
We see how deaths and censored observations are distributed in these intervals.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(df[df.event == 1].time.values, bins=interval_bounds,
color=red, alpha=0.5, lw=0,
label='Uncensored');
ax.hist(df[df.event == 0].time.values, bins=interval_bounds,
color=blue, alpha=0.5, lw=0,
label='Censored');
ax.set_xlim(0, interval_bounds[-1]);
ax.set_xlabel('Months since mastectomy');
ax.set_yticks([0, 1, 2, 3]);
ax.set_ylabel('Number of observations');
ax.legend();
###Output
_____no_output_____
###Markdown
With the prior distributions on $\beta$ and $\lambda_0(t)$ chosen, we now show how the model may be fit using MCMC simulation with `pymc3`. The key observation is that the piecewise-constant proportional hazard model is [closely related](http://data.princeton.edu/wws509/notes/c7s4.html) to a Poisson regression model. (The models are not identical, but their likelihoods differ by a factor that depends only on the observed data and not the parameters $\beta$ and $\lambda_j$. For details, see Germán Rodríguez's WWS 509 [course notes](http://data.princeton.edu/wws509/notes/c7s4.html).)We define indicator variables based on whether or not the $i$-th subject died in the $j$-th interval,$$d_{i, j} = \begin{cases} 1 & \textrm{if subject } i \textrm{ died in interval } j \\ 0 & \textrm{otherwise}\end{cases}.$$
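###Markdown
The claim that the two likelihoods differ only by a factor independent of the parameters can be checked numerically. The following editorial sketch (toy numbers, not part of the original tutorial) compares the Poisson log-likelihood with mean $t\,\lambda$ to the piecewise-exponential contribution $d \log \lambda - t \lambda$; their difference does not change as $\lambda$ varies.
###Code
# Editorial sketch (toy values): the difference between the Poisson log-pmf with
# mean t * lam and the piecewise-exponential term d*log(lam) - t*lam is constant
# in lam, so both lead to the same inference about lam.
import numpy as np
from scipy.stats import poisson

d_toy, t_toy = 1, 2.5  # one event, 2.5 months at risk (assumed values)
for lam in [0.05, 0.1, 0.4]:
    poisson_ll = poisson.logpmf(d_toy, t_toy * lam)
    pexp_ll = d_toy * np.log(lam) - t_toy * lam
    print(round(poisson_ll - pexp_ll, 6))  # same number on every iteration
###Output
_____no_output_____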
###Code
last_period = np.floor((df.time - 0.01) / interval_length).astype(int)
death = np.zeros((n_patients, n_intervals))
death[patients, last_period] = df.event
###Output
_____no_output_____
###Markdown
We also define $t_{i, j}$ to be the amount of time the $i$-th subject was at risk in the $j$-th interval.
###Code
exposure = np.greater_equal.outer(df.time, interval_bounds[:-1]) * interval_length
exposure[patients, last_period] = df.time - interval_bounds[last_period]
###Output
_____no_output_____
###Markdown
Finally, denote the risk incurred by the $i$-th subject in the $j$-th interval as $\lambda_{i, j} = \lambda_j \exp(\mathbf{x}_i \beta)$.We may approximate $d_{i, j}$ with a Poisson random variable with mean $t_{i, j}\ \lambda_{i, j}$. This approximation leads to the following `pymc3` model.
###Code
SEED = 5078864 # from random.org
with pm.Model() as model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = pm.Normal('beta', 0, sigma=1000)
lambda_ = pm.Deterministic('lambda_', T.outer(T.exp(beta * df.metastized), lambda0))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We now sample from the model.
###Code
n_samples = 1000
n_tune = 1000
with model:
trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
###Output
100%|██████████| 2000/2000 [15:44<00:00, 2.31it/s]
###Markdown
We see that the hazard rate for subjects whose cancer has metastized is about double the rate of those whose cancer has not metastized.
###Code
np.exp(trace['beta'].mean())
pm.plot_posterior(trace, varnames=['beta'], color='#87ceeb');
pm.autocorrplot(trace, varnames=['beta']);
###Output
_____no_output_____
###Markdown
We now examine the effect of metastization on both the cumulative hazard and on the survival function.
###Code
base_hazard = trace['lambda0']
met_hazard = trace['lambda0'] * np.exp(np.atleast_2d(trace['beta']).T)
def cum_hazard(hazard):
return (interval_length * hazard).cumsum(axis=-1)
def survival(hazard):
return np.exp(-cum_hazard(hazard))
def plot_with_hpd(x, hazard, f, ax, color=None, label=None, alpha=0.05):
mean = f(hazard.mean(axis=0))
percentiles = 100 * np.array([alpha / 2., 1. - alpha / 2.])
hpd = np.percentile(f(hazard), percentiles, axis=0)
ax.fill_between(x, hpd[0], hpd[1], color=color, alpha=0.25)
ax.step(x, mean, color=color, label=label);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model');
###Output
_____no_output_____
###Markdown
We see that the cumulative hazard for metastized subjects increases more rapidly initially (through about seventy months), after which it increases roughly in parallel with the baseline cumulative hazard.These plots also show the pointwise 95% high posterior density interval for each function. One of the distinct advantages of the Bayesian model fit with `pymc3` is the inherent quantification of uncertainty in our estimates. Time varying effectsAnother of the advantages of the model we have built is its flexibility. From the plots above, we may reasonably believe that the additional hazard due to metastization varies over time; it seems plausible that cancer that has metastized increases the hazard rate immediately after the mastectomy, but that the risk due to metastization decreases over time. We can accommodate this mechanism in our model by allowing the regression coefficients to vary over time. In the time-varying coefficient model, if $s_j \leq t < s_{j + 1}$, we let $\lambda(t) = \lambda_j \exp(\mathbf{x} \beta_j).$ The sequence of regression coefficients $\beta_1, \beta_2, \ldots, \beta_{N - 1}$ form a normal random walk with $\beta_1 \sim N(0, 1)$, $\beta_j\ |\ \beta_{j - 1} \sim N(\beta_{j - 1}, 1)$.We implement this model in `pymc3` as follows.
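###Markdown
Before fitting, it can help to see what the normal random-walk prior on $\beta_j$ allows. The cell below is an editorial sketch that draws a few trajectories from this prior with plain NumPy (it does not use `pymc3` and is not part of the original tutorial).
###Code
# Editorial sketch: a few draws from the random-walk prior beta_1 ~ N(0, 1),
# beta_j | beta_{j-1} ~ N(beta_{j-1}, 1). Each row is one prior trajectory.
import numpy as np

rng = np.random.RandomState(0)
n_draws, n_steps = 5, 20                  # 20 steps shown purely for illustration
prior_walks = np.cumsum(rng.normal(0.0, 1.0, size=(n_draws, n_steps)), axis=1)

print(prior_walks.shape)
print(np.round(prior_walks[0, :5], 2))    # first few steps of one trajectory
###Output
_____no_output_____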
###Code
with pm.Model() as time_varying_model:
lambda0 = pm.Gamma('lambda0', 0.01, 0.01, shape=n_intervals)
beta = GaussianRandomWalk('beta', tau=1., shape=n_intervals)
lambda_ = pm.Deterministic('h', lambda0 * T.exp(T.outer(T.constant(df.metastized), beta)))
mu = pm.Deterministic('mu', exposure * lambda_)
obs = pm.Poisson('obs', mu, observed=death)
###Output
_____no_output_____
###Markdown
We proceed to sample from this model.
###Code
with time_varying_model:
time_varying_trace = pm.sample(n_samples, tune=n_tune, random_seed=SEED)
pm.forestplot(time_varying_trace, varnames=['beta']);
###Output
_____no_output_____
###Markdown
We see from the plot of $\beta_j$ over time below that initially $\beta_j > 0$, indicating an elevated hazard rate due to metastization, but that this risk declines over time, with $\beta_j < 0$ eventually.
###Code
fig, ax = plt.subplots(figsize=(8, 6))
beta_hpd = np.percentile(time_varying_trace['beta'], [2.5, 97.5], axis=0)
beta_low = beta_hpd[0]
beta_high = beta_hpd[1]
ax.fill_between(interval_bounds[:-1], beta_low, beta_high,
color=blue, alpha=0.25);
beta_hat = time_varying_trace['beta'].mean(axis=0)
ax.step(interval_bounds[:-1], beta_hat, color=blue);
ax.scatter(interval_bounds[last_period[(df.event.values == 1) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 1) & (df.metastized == 1)]],
c=red, zorder=10, label='Died, cancer metastized');
ax.scatter(interval_bounds[last_period[(df.event.values == 0) & (df.metastized == 1)]],
beta_hat[last_period[(df.event.values == 0) & (df.metastized == 1)]],
c=blue, zorder=10, label='Censored, cancer metastized');
ax.set_xlim(0, df.time.max());
ax.set_xlabel('Months since mastectomy');
ax.set_ylabel(r'$\beta_j$');
ax.legend();
###Output
_____no_output_____
###Markdown
The coefficients $\beta_j$ begin declining rapidly around one hundred months post-mastectomy, which seems reasonable, given that only three of the twelve subjects whose cancer had metastized and died during the study lived past this point.The change in our estimate of the cumulative hazard and survival functions due to time-varying effects is also quite apparent in the following plots.
###Code
tv_base_hazard = time_varying_trace['lambda0']
tv_met_hazard = time_varying_trace['lambda0'] * np.exp(np.atleast_2d(time_varying_trace['beta']))
fig, ax = plt.subplots(figsize=(8, 6))
ax.step(interval_bounds[:-1], cum_hazard(base_hazard.mean(axis=0)),
color=blue, label='Had not metastized');
ax.step(interval_bounds[:-1], cum_hazard(met_hazard.mean(axis=0)),
color=red, label='Metastized');
ax.step(interval_bounds[:-1], cum_hazard(tv_base_hazard.mean(axis=0)),
color=blue, linestyle='--', label='Had not metastized (time varying effect)');
ax.step(interval_bounds[:-1], cum_hazard(tv_met_hazard.mean(axis=0)),
color=red, linestyle='--', label='Metastized (time varying effect)');
ax.set_xlim(0, df.time.max() - 4);
ax.set_xlabel('Months since mastectomy');
ax.set_ylim(0, 2);
ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
ax.legend(loc=2);
fig, (hazard_ax, surv_ax) = plt.subplots(ncols=2, sharex=True, sharey=False, figsize=(16, 6))
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, cum_hazard,
hazard_ax, color=blue, label='Had not metastized')
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, cum_hazard,
hazard_ax, color=red, label='Metastized')
hazard_ax.set_xlim(0, df.time.max());
hazard_ax.set_xlabel('Months since mastectomy');
hazard_ax.set_ylim(0, 2);
hazard_ax.set_ylabel(r'Cumulative hazard $\Lambda(t)$');
hazard_ax.legend(loc=2);
plot_with_hpd(interval_bounds[:-1], tv_base_hazard, survival,
surv_ax, color=blue)
plot_with_hpd(interval_bounds[:-1], tv_met_hazard, survival,
surv_ax, color=red)
surv_ax.set_xlim(0, df.time.max());
surv_ax.set_xlabel('Months since mastectomy');
surv_ax.set_ylabel('Survival function $S(t)$');
fig.suptitle('Bayesian survival model with time varying effects');
###Output
_____no_output_____ |
notebooks/scop-class-prediction.ipynb | ###Markdown
SCOP Class Prediction using Machine learning This notebook is an example of a workflow for a simple machine learning problem. In particular, we will be looking at protein classification according to SCOP1 class.The problem is formulated as a binary classification problem, in which we ask whether a template protein is from the same SCOP classification as the target protein.In the dataset provided, each sample has 8 pairwise sequence-based features between the target and template proteins.1. Murzin A. G., Brenner S. E., Hubbard T., Chothia C. (1995). SCOP: a structural classification of proteins database for the investigation of sequences and structures. J. Mol. Biol. 247, 536-540 Imports* numpy: Matrix algebra and numerical methods.* pandas: Data frames for manipulating and visualising data as tables.* matplotlib: Everybody's favourite Python plotting library.* seaborn: Statistical visualisation library built on matplotlib and pandas. Lots of high-level functions for data visualisation.* scikit-learn (sklearn): Machine learning library. Today we'll use its implementations of logistic regression and random forest.* plotting: python file containing helping functions for plotting graphs.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler
import plotting
sns.set(context='notebook', style='white', font_scale=1.8)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data exploration We read our data straight into a dataframe using Pandas. Jupyter renders dataframes as nice tables, allowing us to look at our data as soon as we load it.
###Code
all_data = pd.read_csv('data/Data_ISMB2019.txt', sep=' ')
all_data.dropna(axis='index', how='any', inplace=True)
###Output
_____no_output_____
###Markdown
The data has 8 pairwise features between the template and the target protein and a label.* Target_Length: Length of target sequence* Template_Length: Length of template sequence* Contact_PPV: % of predicted contacts (using metaPSICOV) for the target present in the template* Contact_TP: Number of predicted contacts for the target present in the template* Contact_P: Number of predicted contacts for the target successfully mapped to the template* Contact_All: Number of predicted contacts for the target.* Neff: Number of Effective Sequences* SeqID: Sequence Identity (calculated from the NW sequence alignment)* Label: Fam (same family), SFam (same superfamily), Fold (same fold), Random
###Code
all_data.head()
split = pd.read_csv('./data/Data_Split.txt', sep=' ', header=None)
split.columns = ['Protein', 'Split']
# Select target-template pairs where both proteins belong to the 'same family' cluster
targets = split[split['Split']=='FAMILY']['Protein']
data = all_data[(all_data['Target'].isin(targets)) & (all_data['Template'].isin(targets))].copy()
# Drop duplicated rows since we don't know which entry is correct
#
duplicate_idx = data.duplicated(subset=['Template', 'Target'], keep=False)
duplicates = data.loc[duplicate_idx].sort_values(by=['Target', 'Template'])
#
data.drop_duplicates(subset=['Template', 'Target'], keep=False, inplace=True)
###Output
_____no_output_____
###Markdown
Many of the plotting functions available through seaborn can operate directly on a pandas dataframe and use the row and column names to automatically annotate the plot. This is a very powerful way to rapidly visualise data during the exploration stage. Here we create a bar plot of the number of examples of each class in the data set, as data imbalance is an important consideration when training and testing a classifier.
###Code
fig, ax = plt.subplots(1, 1, figsize=(5,5))
sns.countplot(data=data, x='Label', order=['Random', 'Fam', 'SFam', 'Fold'], ax=ax);
#fig.savefig('bars.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
We're going to focus on the toy problem of distinguishing between proteins that are in the same family and proteins that are completely unrelated. It's worth noting that pandas is not always clear about whether or not it is returning a view or a copy of the contents of a dataframe, so I'm explicitly creating a copy of the subset of the data we want. This lets us play with the data without modifying the original dataframe. In an interactive environment like Jupyter it pays to be careful when manipulating data and carefully document any changes made, as we want to take advantage of the ability to modify individual cells without re-running the entire notebook every time we make a change.
###Code
family_data = data[data['Label'].isin(['Fam', 'Random'])].copy()
fig, ax = plt.subplots(1, 1, figsize=(5,5))
sns.countplot(data=family_data, x='Label', order=['Random', 'Fam'], ax=ax);
###Output
_____no_output_____
###Markdown
Training-test split As our visualisation shows, the data set we'll be using has a good balance of examples of proteins from the same families and proteins that are unrelated, so we don't need to worry about data imbalance here. As in any machine learning project, it's important that we decide on our test set before going any further. We adopt an 80/20 split, using 80% of the data for training and reserving 20% for testing. For the workshop we're also using a subset of 5000 randomly-chosen proteins just to speed up model training. For this problem, we're splitting the data *by target* rather than simply splitting the examples to ensure that all examples for a target are in the same set.Also, for reproducibility, we set the seed for our random number generator.
###Code
np.random.seed(42)
n_samples = 5000
sample = np.random.choice(family_data['Target'].unique(), size=n_samples, replace=False)
n_train = int(0.8*n_samples)
n_test = int(0.2*n_samples)
train = sample[:n_train]
test = sample[n_train:]
feature_names = ['Target_Length', 'Template_Length', 'Contact_PPV', 'Contact_TP', 'Contact_P', 'Contact_All', 'Neff', 'SeqID']
train_idx = family_data['Target'].isin(train)
test_idx = family_data['Target'].isin(test)
X_train = family_data[train_idx][feature_names].values
X_test = family_data[test_idx][feature_names].values
y_train = family_data[train_idx]['Label'].replace({'Random': 0, 'Fam': 1}).values
y_test = family_data[test_idx]['Label'].replace({'Random': 0, 'Fam': 1}).values
###Output
_____no_output_____
###Markdown
Confirm that our training and test sets all have a similar balance of positive and negative examples.
###Code
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
sns.countplot(x=y_train, ax=axes[0])
sns.countplot(x=y_test, ax=axes[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Training a random forest classifier
###Code
rf = RandomForestClassifier(n_estimators=50, random_state=42, n_jobs=-1)
rf.fit(X_train, y_train)
print(f'Accuracy score: {rf.score(X_test, y_test):.3f}')
###Output
Accuracy score: 0.880
###Markdown
Visualising the results
###Code
predicted = rf.predict(X_test)
test_probs = rf.predict_proba(X_test)[:,1]
fig, ax = plt.subplots(figsize=(5,5))
plotting.draw_confusion_matrix(y_test, predicted, class_labels={0: 'Random', 1: 'Fam'}, ax=ax)
fig.tight_layout()
#fig.savefig('rf_confusion_matrix.png', dpi=300)
###Output
_____no_output_____
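###Markdown
As an optional aside (not part of the original workshop material), the fitted random forest also exposes impurity-based feature importances, which give a rough indication of which pairwise features drive the classification.
###Code
# Optional aside: impurity-based feature importances of the fitted random forest.
# Assumes `rf` and `feature_names` from the cells above.
import pandas as pd

rf_importances = pd.Series(rf.feature_importances_, index=feature_names)
print(rf_importances.sort_values(ascending=False))
###Output
_____no_output_____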
###Markdown
Training another machine learning algorithm - Logistic regression
###Code
logistic = LogisticRegression(C=1e5, random_state=42, solver='liblinear')
logistic.fit(X_train, y_train)
logistic_test_probs = logistic.predict_proba(X_test)[:,1]
print(f'Accuracy score: {logistic.score(X_test, y_test):.3f}')
fig, ax = plt.subplots(figsize=(8, 8))
plotting.draw_roc_curve(y_test, test_probs, name='RF Test', ax=ax)
plotting.draw_roc_curve(y_test, logistic_test_probs, name='Logistic Test', ax=ax)
ax.plot([0,1],[0,1], 'k--', label='Random classifier AUC = 0.5')
ax.legend(loc='best')
fig.tight_layout()
#fig.savefig('roc_curve.png', dpi=300)
###Output
_____no_output_____
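###Markdown
As a numeric complement to the ROC curves above (an optional aside, not in the original notebook), the areas under the curves can also be computed directly with scikit-learn.
###Code
# Optional aside: compute the ROC AUC values numerically.
# Assumes y_test, test_probs and logistic_test_probs from the cells above.
from sklearn.metrics import roc_auc_score

print(f'RF AUC:       {roc_auc_score(y_test, test_probs):.3f}')
print(f'Logistic AUC: {roc_auc_score(y_test, logistic_test_probs):.3f}')
###Output
_____no_output_____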
###Markdown
Frequently, scaling of variables is important in machine learning projects (beyond the scope of this workshop). Let's see how logistic regression performs once we've scaled our variables.
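###Markdown
One more optional aside before the standard-scaling cell that follows: `MinMaxScaler` is imported at the top of the notebook but never used. A min-max scaled variant of the same logistic model would look like the sketch below (editorial addition; whether it helps depends on the feature distributions).
###Code
# Optional sketch: min-max scaling as an alternative to standard scaling.
# Assumes X_train, X_test, y_train, y_test and `logistic` from the cells above.
from sklearn.preprocessing import MinMaxScaler

mm_scaler = MinMaxScaler().fit(X_train)
X_train_mm = mm_scaler.transform(X_train)
X_test_mm = mm_scaler.transform(X_test)

logistic.fit(X_train_mm, y_train)
print(f'Accuracy score (min-max scaled): {logistic.score(X_test_mm, y_test):.3f}')
###Output
_____no_output_____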
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
logistic.fit(X_train_scaled, y_train)
logistic_test_probs_scaled = logistic.predict_proba(X_test_scaled)[:,1]
print(f'Accuracy score: {logistic.score(X_test_scaled, y_test):.3f}')
fig, ax = plt.subplots(figsize=(8, 8))
plotting.draw_roc_curve(y_test, test_probs, name='RF Test', ax=ax)
plotting.draw_roc_curve(y_test, logistic_test_probs, name='Logistic Test', ax=ax)
plotting.draw_roc_curve(y_test, logistic_test_probs_scaled, name='Logistic Test (scaled)', ax=ax)
ax.plot([0,1],[0,1], 'k--', label='Random classifier AUC = 0.5')
ax.legend(loc='best')
fig.tight_layout()
#fig.savefig('roc_curve.png', dpi=300)
###Output
_____no_output_____ |
Adversarial_Autoencoder_en_Colab.ipynb | ###Markdown
###Code
!pip install nibabel
import os
%matplotlib inline
%reload_ext autoreload
%autoreload 2
!git clone https://github.com/danielcanueto/abide
os.chdir("abide")
!python3 download_abide_preproc.py -d reho -p cpac -s nofilt_noglobal -o '/content/ABIDE_data'
os.chdir("..")
!rm -r Adversarial_Autoencoder
# import os
!git clone https://github.com/Naresh1318/Adversarial_Autoencoder
#!python3 Adversarial_Autoencoder/psy_manifold.py --train True
!python3 Adversarial_Autoencoder/psy_manifold_v2.py --train True
!python3 Adversarial_Autoencoder/psy_manifold_v2.py --train False
os.chdir('Adversarial_Autoencoder')
!python3 adversarial_autoencoder.py
generate_image_grid(sess, op=decoder_image)
os.listdir()
###Output
_____no_output_____ |
Taller_Intro_a_la_probabilidad_Juan_Sarmiento.ipynb | ###Markdown
In-class Workshop 1 - Juan Camilo Sarmiento. Generating random numbers in Python. Objectives: 1. Familiarize the student with generating random numbers from different distributions in Python. 2. Use Python's tools for handling arrays. 3. Use Python's tools for generating plots. Submission: In U-virtual, before the next class, via the link designated as In-class Workshop 1.
###Code
import numpy #as np# si no lo tiene instalado por favor correr en su consola $pip install numpy
import scipy #as sp# si no lo tiene instalado por favor correr en su consola $pip install scipy
import matplotlib #si no lo tiene instalado por favor correr en su consola $pip install matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Discrete distributions. 1. Create an array of N samples, with N of your choice but greater than 1000, of numbers drawn from the corresponding distributions; for each array of numbers: 2. Plot the random numbers, 3. Sort them from smallest to largest and plot, 4. Find the theoretical values of the mean and variance and compare them with the estimates.
###Code
#Para todas las simulaciones:
N=1000
m=30
numpy.random.seed(seed=2**32 - 1)#Para fijar una semilla para los experimentos y que los numeros generados siempre sean los mismos en cada corrida completa de este fuente.
###Output
_____no_output_____
###Markdown
1.1. Bernoulli distribution
###Code
# bernoulli con cierto p entre 0 y 1
p=0.3
from scipy.stats import bernoulli
mean, var, skew, kurt = bernoulli.stats(p, moments='mvsk')
r = bernoulli.rvs(p, size=N)
#ver https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html
plt.plot(r)
plt.title(str(N)+" puntos distribuidos bernoulli con p="+str(p))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos distribuidos bernoulli ordenados con p="+str(p))
plt.show()
plt.hist(r, bins = m)
plt.title("Histograma de los "+str(N)+" puntos distribuidos bernoulli con p="+str(p))
plt.show()
print("Note que hasta ahora no sabemos el tipo de nuestro arreglo r:")
print(type(r))
print("Vamos a hallar algunas estimaciones de estadisticas de r")
# ver https://numpy.org/doc/stable/reference/routines.statistics.html
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la bernoulli con p=",p,"son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza para la bernoulli con p=",p,"son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
1.2. Binomial distribution
###Code
# binomial con cierto p entre 0 y 1 y cierto n
n,p = 10, 0.5
from scipy.stats import binom
mean, var, skew, kurt = binom.stats(n, p, moments='mvsk')
r=binom.rvs(n,p,size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución binomial con p="+str(p)+"y con n="+str(n))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución binomial ordenados con p="+str(p)+"y con n="+str(n))
plt.show()
plt.hist(r,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución binomial con p="+str(p)+"y con n="+str(n))
plt.show()
varest=r.var()
#print("El tipo de dato de r es:")
#print(type(r))
print("Los valores teoricos de la media y varianza para la binomial con p=",p,"y con n=",n,"son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza para la binomial con p=",p,"y con n=",n,"son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
1.3. Geometric distribution
###Code
# Geométrica con cierto p entre 0 y 1
p = 0.3
from scipy.stats import geom
mean, var, skew, kurt = geom.stats(p, moments='mvsk')
r=geom.rvs(p,size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución geométrica con p="+str(p))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución geométrica ordenados con p="+str(p))
plt.show()
plt.hist(r,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución geométrica con p="+str(p))
plt.show()
varest=r.var()
print("Los valores teoricos de la media y varianza para la geométrica con p=",p,"son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza para la geométrica con p=",p,"son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
1.4. Poisson distribution
###Code
# Poisson con cierto Lamnda L
L = 5
from scipy.stats import poisson
mean, var, skew, kurt = poisson.stats(L, moments='mvsk')
r=poisson.rvs(L,size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución geométrica con L (L=mu)="+str(L))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución geométrica ordenados con L="+str(L))
plt.show()
plt.hist(r,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución geométrica con L="+str(L))
plt.show()
varest=r.var()
print("Los valores teoricos de la media y varianza para la geométrica con L=",L,"son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza para la geométrica con L=",L,"son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
1.5. Negative binomial distribution
###Code
#Binomial negativa con parametro p después de r fallos
r,p = 5,0.5
from scipy.stats import nbinom
mean, var, skew, kurt = nbinom.stats(r, p, moments='mvsk')
a=nbinom.rvs(r,p,size=N)
plt.plot(a)
plt.title(str(N)+" puntos con distribución binomial con r="+str(r)+"y con p="+str(p))
plt.show()
a.sort()
plt.plot(a)
plt.title(str(N)+" puntos con distribución binomial ordenados con r="+str(r)+"y con p="+str(p))
plt.show()
plt.hist(a,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución binomial con r="+str(r)+"y con p="+str(p))
plt.show()
varest=a.var()
print("Los valores teoricos de la media y varianza para la binomial con r=",r,"y con p=",p,"son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza para la binomial con r=",r,"y con p=",p,"son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
2. Continuous distributions.
###Code
#Para todas las simulaciones:
N=1000
m=30
numpy.random.seed(seed=0)#Para fijar una semilla para los experimentos y que los numeros generados siempre sean los mismos en cada corrida completa de este fuente.
###Output
_____no_output_____
###Markdown
2.1. Normal distribution
###Code
# Normal de media 5 y desviación estandar 0.5
u=5
std=0.5
mean, var, skew, kurt = scipy.stats.norm.stats(moments='mvsk',loc=u,scale=std)
r = scipy.stats.norm.rvs (loc=u,scale=std, size=N)
#ver https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html
plt.plot(r)
plt.title(str(N)+" puntos distribuidos normales con media="+str(u)+" y std="+str(std))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos distribuidos normales ordenados con p="+str(p))
plt.show()
plt.hist(r, bins = m)
plt.title("Histograma de los "+str(N)+" puntos distribuidos normales con u="+str(u)+ " y std= "+str(std))
plt.show()
print("Note que hasta ahora no sabemos el tipo de nuestro arreglo r:")
print(type(r))
print("Vamos a hallar algunas estimaciones de estadisticas de r")
# ver https://numpy.org/doc/stable/reference/routines.statistics.html
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la normal son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
2.2. Uniform distribution
###Code
# Uniforme entre 0 y 255
a,b=0,255
from scipy.stats import uniform
mean, var, skew, kurt = uniform.stats(moments='mvsk',loc=a,scale=b)
r = uniform.rvs(loc=a,scale=b,size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución uniforme")
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución uniforme ordenados")
plt.show()
plt.hist(r, bins = m)
plt.title("Histograma de los "+str(N)+" puntos distribuidos uniformes en un rango entre a="+str(a)+" y b="+str(b))
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la normal son u=",mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=",avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
2.3. Student's t distribution
###Code
#T de Student con paramero K=1
K=1.0
from scipy.stats import t
mean, var, skew, kurt = t.stats(K, moments='mvsk')
r = t.rvs(K, size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución T con K="+str(K)+" (K=DoF)")
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución T ordenados con DoF=K="+str(K))
plt.show()
plt.hist(r,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución T de Student con DoF=K="+str(K))
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la t de student son u=",mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=",avg,"var_est=",varest)
#T de Student variando el numero de m y el numero de muestras y K=1
N=10000
m=300
r = t.rvs(K, size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución T con K="+str(K)+" (K=DoF)")
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución T ordenados con K(K=DoF)="+str(K))
plt.show()
plt.hist(r,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución T de Student con DoF=K="+str(K))
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la t de student son u=",mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=",avg,"var_est=",varest)
#T de Student variando m, numero de muestras, y la semilla, se deja K=1
N=10000
m=300
numpy.random.seed(seed=2**16 -1)
r = t.rvs(K, size=N)
plt.plot(r)
plt.title(str(N)+" puntos con distribución T con K="+str(K)+" (K=DoF)")
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos con distribución T ordenados con K(K=DoF)="+str(K))
plt.show()
plt.hist(r,bins=m)
plt.title("Histograma de los "+str(N)+" puntos con distribución T de Student con DoF=K="+str(K))
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la t de student son u=",mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=",avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
2.4. Exponential distribution
###Code
# Exponencial con Lamnda L
L=0.3
from scipy.stats import expon
mean, var, skew, kurt = expon.stats(moments='mvsk', loc=L, scale=1/L)
r = expon.rvs(loc=L, scale=(1/L), size=N)
plt.plot(r)
plt.title(str(N)+" puntos distribuidos exponencial con lamnda="+str(L))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos distribuidos normales ordenados con lamnda="+str(L))
plt.show()
plt.hist(r, bins = m)
plt.title("Histograma de los "+str(N)+" puntos distribuidos normales con lamnda="+str(L))
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la exponencial son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
2.5. Chi-squared distribution
###Code
#Chi cuadrada con K grados de libertad
K=3.0
from scipy.stats import chi2
mean, var, skew, kurt = chi2.stats(K, moments='mvsk')
r = chi2.rvs (K, size=N)
plt.plot(r)
plt.title(str(N)+" puntos distribuidos chi cuadrado con "+str(K)+" DoF")
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos distribuidos chi cuadrado con "+str(K)+" DoF")
plt.show()
plt.hist(r, bins = m)
plt.title("Histograma de los "+str(N)+" puntos distribuidos chi cuadrado con "+str(K)+" DoF")
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la chi cuadrado son u=" ,mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____
###Markdown
2.6. Gamma distribution
###Code
#Gamma con parametros k y theta t
k,t=2.54,0.43 #k=shape,t=scale o a=shape,b=scale (b=1/t)
from scipy.stats import gamma
mean,var,skew,kurt = gamma.stats(a=k,scale=t,moments='mvsk') #loc por definición es 0
r = gamma.rvs (a=k,scale=t,size=N)
plt.plot(r)
plt.title(str(N)+" puntos distribuidos gamma con k="+str(k)+" y theta t="+str(t))
plt.show()
r.sort()
plt.plot(r)
plt.title(str(N)+" puntos distribuidos gamma con k="+str(k)+" y theta t="+str(t))
plt.show()
plt.hist(r, bins = m)
plt.title("Histograma de los "+str(N)+" puntos distribuidos gamma con k="+str(k)+" y theta t="+str(t))
plt.show()
avg=r.mean()
varest=r.var()
print("Los valores teoricos de la media y varianza para la gamma son u=",mean,"var=",var)
print("Los valores estimados en nuestro experimento de la media y varianza son u_est=" ,avg,"var_est=",varest)
###Output
_____no_output_____ |
Homeworks/ddxk/legandre_3.ipynb | ###Markdown
* We can see that the errors of the Taylor approximation are very small in an interval around 0, but they tend to grow large as we move away from 0. This is because the Taylor approximation is a local approximation.* The orthogonal projection, on the other hand, minimizes the global distance, the MSE between the approximation and the original vector.* Although the error is not as small around the origin, the energy of the error is lower when considered over the entire interval - it's a global optimization of the approximation.
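###Markdown
To make the local-versus-global point concrete, here is a self-contained editorial sketch (independent of the variables defined earlier in this notebook) comparing a second-order Taylor expansion of $e^x$ around 0 with a global least-squares quadratic fit on $[-1, 1]$.
###Code
# Editorial sketch: local (Taylor) vs global (least-squares) quadratic
# approximations of exp(x) on [-1, 1].
import numpy as np

x = np.linspace(-1, 1, 1001)
f = np.exp(x)

taylor2 = 1 + x + x**2 / 2            # Taylor expansion of exp(x) around 0
ls_coeffs = np.polyfit(x, f, deg=2)   # global least-squares quadratic
ls2 = np.polyval(ls_coeffs, x)

print('Taylor MSE:        ', np.mean((f - taylor2) ** 2))
print('Least-squares MSE: ', np.mean((f - ls2) ** 2))
print('|error| at x=0:    ', abs(f[500] - taylor2[500]), abs(f[500] - ls2[500]))
###Output
_____no_output_____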
###Code
plt.title('Taylor approximation vs polynomial approximation')
plt.xlabel('X')
plt.ylabel('Y')
plt.plot(t, taylor_error, label = 'Taylor error')
plt.plot(t, poly_error, label = 'Poly error')
plt.legend()
plt.show()
def mse(loss_arr):
return (1 / (len(loss_arr))) * sum(loss_arr ** 2)
mse(taylor_error)
mse(poly_error)
###Output
_____no_output_____ |
notebooks/wandb.ipynb | ###Markdown
Importing libraries and classes
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import wandb
import pytorch_lightning as pl
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint
from pytorch_lightning.loggers import TensorBoardLogger
from nam.config import defaults
from nam.data import FoldedDataset
from nam.data import NAMDataset
from nam.models import NAM
from nam.models import get_num_units
from nam.trainer import LitNAM
from nam.types import Config
from nam.utils import parse_args
from nam.utils import plot_mean_feature_importance
from nam.utils import plot_nams
from nam.data import load_gallup_data
###Output
_____no_output_____
###Markdown
Define the experiment configuration
###Code
config = defaults()
print(config)
###Output
_____no_output_____
###Markdown
---------
###Code
def run():
hparams_run = wandb.init()
config.update(**hparams_run.config)
dataset = load_gallup_data(config,
data_path='data/GALLUP.csv',
features_columns= ["income_2", "WP1219", "WP1220", "year"])
dataloaders = dataset.train_dataloaders()
model = NAM(
config=config,
name="NAM_GALLUP",
num_inputs=len(dataset[0][0]),
num_units=get_num_units(config, dataset.features),
)
for fold, (trainloader, valloader) in enumerate(dataloaders):
tb_logger = TensorBoardLogger(save_dir=config.logdir,
name=f'{model.name}',
version=f'fold_{fold + 1}')
checkpoint_callback = ModelCheckpoint(filename=tb_logger.log_dir +
"/{epoch:02d}-{val_loss:.4f}",
monitor='val_loss',
save_top_k=config.save_top_k,
mode='min')
litmodel = LitNAM(config, model)
trainer = pl.Trainer(logger=tb_logger,
max_epochs=config.num_epochs,
checkpoint_callback=checkpoint_callback)
trainer.fit(litmodel,
train_dataloader=trainloader,
val_dataloaders=valloader)
wandb.log({
"plot_mean_feature_importance": wandb.Image(plot_mean_feature_importance(model, dataset)),
"plot_nams": wandb.Image(plot_nams(model, dataset))
})
sweep_config = {
'method': 'bayes',
'metric': {
'name': 'val_loss',
'goal': 'minimize'
},
'parameters': {
'activation': {
'values': ["exu", "relu"]
},
"batch_size": {
'values': [2048, 4096]
},
"dropout": {
'min': 0.0,
'max': 0.99
},
"feature_dropout": {
'min': 0.0,
'max': 0.99
},
"output_regularization": {
'min': 0.0,
'max': 0.99
},
"l2_regularization": {
'min': 0.0,
'max': 0.99
},
"lr": {
'min': 1e-4,
'max': 0.1
},
"hidden_sizes": {
'values': [[], [32], [64, 32], [128, 64, 32]]
},
}
}
sweep_id = wandb.sweep(sweep_config, project="nam")
wandb.agent(sweep_id, function=run)
###Output
_____no_output_____ |
ASF/Projects/AI_Water_Masks_From_Prepared_Data_Stack.ipynb | ###Markdown
Flood Mapping using a Convolutional Neural Network (CNN) Alex Lewandowski; University of Alaska Fairbanks Adapted from ASF's AI_Water project: McKade Sorensen, George Meier, and Rohan Weeden This takes a prepared stack of RTC products as input, containing both VV and VH polarities (see the Prepare_Data_Stack notebook). It uses a fully convolutional neural network to create predicted water masks for all image pairs (VV and VH) in the stack.ASF's AI_Water is open source and freely available on github: https://github.com/asfadmin/AI_Water Important: The AI_Water neural network was trained on data collected during warm months. It was not trained on data collected from times and places experiencing sub-freezing temperatures.
###Code
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/machine_learning':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "machine_learning" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select "machine_learning" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "machine_learning" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
###Output
_____no_output_____
###Markdown
Install TensorFlow, Keras, and other Needed Python Libraries Note: this must be done only once each time your OpenSARLab server is restarted Import necessary packages and libraries
###Code
import os
from osgeo import gdal
from typing import Tuple
import numpy as np
from keras.models import Model
from keras.models import load_model as kload_model
import asf_notebook as asfn
###Output
_____no_output_____
###Markdown
Setting Up Network and Path to Data Sets Define the path to our network
###Code
ai_water_path = '/home/jovyan/notebooks/ASF/Projects'
###Output
_____no_output_____
###Markdown
Write a function to create a list of paths to all tiffs in a directory
###Code
def get_tiff_paths(paths: str) -> list:
tiff_paths = !ls $paths | sort -t_ -k5,5
return tiff_paths
###Output
_____no_output_____
###Markdown
Enter the path to the data stack
###Code
while True:
print("Enter the absolute path to the directory holding your tiffs.")
tiff_dir = input()
paths = f"{tiff_dir}/*.tif*"
if os.path.exists(tiff_dir):
tiff_paths = get_tiff_paths(paths)
if len(tiff_paths) < 1:
print(f"{tiff_dir} exists but contains no tifs.")
print("You will not be able to proceed until tifs are prepared.")
break
else:
print(f"\n{tiff_dir} does not exist.")
continue
###Output
_____no_output_____
###Markdown
Move into the parent directory of the directory containing the data and create a directory in which to store the water masks
###Code
analysis_directory = os.path.dirname(tiff_dir)
os.chdir(analysis_directory)
mask_directory = f'{analysis_directory}/AI_Water_Masks'
asfn.new_directory(mask_directory)
print(f"Current working directory: {os.getcwd()}")
###Output
_____no_output_____
###Markdown
Discover Available Data Sets Write a function to create a dictionary containing lists of each vv/vh pair
###Code
def group_polarizations(tiff_paths: list) -> dict:
pths = {}
for tiff in tiff_paths:
product_name = tiff.split('.')[0][:-2]
if product_name in pths:
pths[product_name].append(tiff)
else:
pths.update({product_name: [tiff]})
pths[product_name].sort()
return pths
###Output
_____no_output_____
###Markdown
Write a function to confirm the presence of both VV and VH images in all image sets
###Code
def confirm_dual_polarizations(paths: dict) -> bool:
for p in paths:
if len(paths[p]) == 2:
if ('vv' not in paths[p][1] and 'VV' not in paths[p][1]) or \
('vh' not in paths[p][0] and 'VH' not in paths[p][0]):
return False
return True
###Output
_____no_output_____
###Markdown
Create a dictionary of VV/VH pairs and check it for completeness
###Code
grouped_pths = group_polarizations(tiff_paths)
if not confirm_dual_polarizations(grouped_pths):
print("ERROR: AI_Water requires both VV and VH polarizations.")
else:
print("Confirmed presence of VV and VH polarities for each product.")
#print(grouped_pths) #uncomment to print VV/VH path pairs
###Output
_____no_output_____
###Markdown
Creating Some Helper Scripts Write a function to pad an image, so it may be split into tiles with consistent dimensions
###Code
def pad_image(image: np.ndarray, to: int) -> np.ndarray:
height, width = image.shape
n_rows, n_cols = get_tile_row_col_count(height, width, to)
new_height = n_rows * to
new_width = n_cols * to
padded = np.zeros((new_height, new_width))
padded[:image.shape[0], :image.shape[1]] = image
return padded
###Output
_____no_output_____
###Markdown
Write a function to tile an image
###Code
def tile_image(image: np.ndarray, width: int = 512, height: int = 512) -> np.ndarray:
_nrows, _ncols = image.shape
_strides = image.strides
nrows, _m = divmod(_nrows, height)
ncols, _n = divmod(_ncols, width)
assert _m == 0, "Image must be evenly tileable. Please pad it first"
assert _n == 0, "Image must be evenly tileable. Please pad it first"
return np.lib.stride_tricks.as_strided(
np.ravel(image),
shape=(nrows, ncols, height, width),
strides=(height * _strides[0], width * _strides[1], *_strides),
writeable=False
).reshape(nrows * ncols, height, width)
###Output
_____no_output_____
###Markdown
Write a function to calculate the number of rows and columns of tiles needed to tile an image to a given size
###Code
def get_tile_row_col_count(height: int, width: int, tile_size: int) -> Tuple[int, int]:
return int(np.ceil(height / tile_size)), int(np.ceil(width / tile_size))
###Output
_____no_output_____
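###Markdown
As a quick sanity check of the helpers above (an editorial sketch that uses a random array rather than a real RTC GeoTIFF), the padding and tiling functions can be exercised like this.
###Code
# Editorial sanity check on a dummy array (not real SAR data): pad to a multiple
# of 512, then tile into 512x512 chips.
dummy = np.random.rand(1000, 1500).astype(np.float32)

n_rows, n_cols = get_tile_row_col_count(*dummy.shape, tile_size=512)
padded = pad_image(dummy, 512)
tiles = tile_image(padded)

print(n_rows, n_cols)    # expect 2 3
print(padded.shape)      # expect (1024, 1536)
print(tiles.shape)       # expect (6, 512, 512)
###Output
_____no_output_____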
###Markdown
Write a function to load a trained model
###Code
def load_model(model_path: str) -> Model:
""" Loads and returns a model. Attaches the model name and that model's
history. """
model_dir = os.path.dirname(model_path)
print(f"model_dir: {model_dir}")
model = kload_model(model_path)
# Attach our extra data to the model
model.__asf_model_name = model_path
return model
###Output
_____no_output_____
###Markdown
Write a function to save a mask
###Code
def write_mask_to_file(mask: np.ndarray, file_name: str, projection: str, geo_transform: str) -> None:
(width, height) = mask.shape
out_image = gdal.GetDriverByName('GTiff').Create(
file_name, height, width, bands=1
)
out_image.SetProjection(projection)
out_image.SetGeoTransform(geo_transform)
out_image.GetRasterBand(1).WriteArray(mask)
out_image.GetRasterBand(1).SetNoDataValue(0)
out_image.FlushCache()
###Output
_____no_output_____
###Markdown
Run CNN-based Flood Mapping on Discovered Data Load the AI_Water model and print a summary of its fully convolutional neural network architecture
###Code
model_path = f'{ai_water_path}/network.h5'
model = load_model(model_path)
print(model.summary())
###Output
_____no_output_____
###Markdown
Iterate through each VV/VH pair, using AI_Water to create a predicted water mask for each
###Code
for pair in grouped_pths:
for tiff in grouped_pths[pair]:
f = gdal.Open(tiff)
img_array = f.ReadAsArray()
original_shape = img_array.shape
n_rows, n_cols = get_tile_row_col_count(*original_shape, tile_size=512)
print(f'tiff: {tiff}')
if 'vv' in tiff or 'VV' in tiff:
vv_array = pad_image(f.ReadAsArray(), 512)
invalid_pixels = np.nonzero(vv_array == 0.0)
vv_tiles = tile_image(vv_array)
else:
vh_array = pad_image(f.ReadAsArray(), 512)
invalid_pixels = np.nonzero(vh_array == 0.0)
vh_tiles = tile_image(vh_array)
# Predict masks
masks = model.predict(
np.stack((vh_tiles, vv_tiles), axis=3), batch_size=1, verbose=1
)
masks.round(decimals=0, out=masks)
# Stitch masks together
mask = masks.reshape((n_rows, n_cols, 512, 512)) \
.swapaxes(1, 2) \
.reshape(n_rows * 512, n_cols * 512) # yapf: disable
mask[invalid_pixels] = 0
filename, ext = os.path.basename(tiff).split('.')
outfile = f"{mask_directory}/{filename[:-3]}_water_mask.{ext}"
write_mask_to_file(mask, outfile, f.GetProjection(), f.GetGeoTransform())
###Output
_____no_output_____ |
notebooks/prepare_wikitext103.ipynb | ###Markdown
82841986 is_char and is_digit 82075350 regex non-ascii and non-digit 86460763 left
###Code
import os
import random
import re
import pandas as pd
max_length = 25
min_length = 1
root = '../data'
charset = 'abcdefghijklmnopqrstuvwxyz'
digits = '0123456789'
def is_char(text, ratio=0.5):
text = text.lower()
length = max(len(text), 1)
char_num = sum([t in charset for t in text])
if char_num < min_length: return False
if char_num / length < ratio: return False
return True
def is_digit(text, ratio=0.5):
length = max(len(text), 1)
digit_num = sum([t in digits for t in text])
if digit_num / length < ratio: return False
return True
###Output
_____no_output_____
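###Markdown
A quick illustrative check of the two filters defined above (the sample strings are arbitrary).
###Code
# Illustrative only.
for sample in ['hello', 'hello123', '2021', 'a1b2c3d4']:
    print(sample, is_char(sample), is_digit(sample))
###Output
_____no_output_____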
###Markdown
generate training dataset
###Code
with open('/tmp/wikitext-103/wiki.train.tokens', 'r') as file:
lines = file.readlines()
inp, gt = [], []
for line in lines:
token = line.lower().split()
for text in token:
text = re.sub('[^0-9a-zA-Z]+', '', text)
if len(text) < min_length:
# print('short-text', text)
continue
if len(text) > max_length:
# print('long-text', text)
continue
inp.append(text)
gt.append(text)
train_voc = os.path.join(root, 'WikiText-103.csv')
pd.DataFrame({'inp':inp, 'gt':gt}).to_csv(train_voc, index=None, sep='\t')
len(inp)
inp[:100]
###Output
_____no_output_____
###Markdown
generate evaluation dataset
###Code
def disturb(word, degree, p=0.3):
if len(word) // 2 < degree: return word
if is_digit(word): return word
if random.random() < p: return word
else:
index = list(range(len(word)))
random.shuffle(index)
index = index[:degree]
new_word = []
for i in range(len(word)):
if i not in index:
new_word.append(word[i])
continue
if (word[i] not in charset) and (word[i] not in digits):
# special token
new_word.append(word[i])
continue
op = random.random()
if op < 0.1: # add
new_word.append(random.choice(charset))
new_word.append(word[i])
elif op < 0.2: continue # remove
else: new_word.append(random.choice(charset)) # replace
return ''.join(new_word)
lines = inp
degree = 1
keep_num = 50000
random.shuffle(lines)
part_lines = lines[:keep_num]
inp, gt = [], []
for w in part_lines:
w = w.strip().lower()
new_w = disturb(w, degree)
inp.append(new_w)
gt.append(w)
eval_voc = os.path.join(root, f'WikiText-103_eval_d{degree}.csv')
pd.DataFrame({'inp':inp, 'gt':gt}).to_csv(eval_voc, index=None, sep='\t')
list(zip(inp, gt))[:100]
###Output
_____no_output_____
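###Markdown
A quick illustrative look at how `disturb` behaves for different degrees (assuming the cells above have been run; the example words and the seed are arbitrary).
###Code
# Illustrative only; the seed is arbitrary and only makes the output reproducible.
random.seed(42)
for word in ['language', 'modelling', 'wikipedia']:
    print(word, '->', disturb(word, degree=1), '|', disturb(word, degree=2))
###Output
_____no_output_____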
MNIST/Session3/3_Regularization(GBN_BN_Dropout).ipynb | ###Markdown
Import Libraries
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data TransformationsWe first start with defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images that it might not otherwise see.
###Code
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
# transforms.RandomRotation((-7.0, 7.0), fill=(1,)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
###Output
_____no_output_____
###Markdown
Dataset and Creating Train/Test Split
###Code
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataloader Arguments & Test/Train Dataloaders
###Code
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
class GhostBatchNorm(nn.BatchNorm2d):
"""
From : https://github.com/davidcpage/cifar10-fast/blob/master/bag_of_tricks.ipynb
Batch norm seems to work best with batch size of around 32. The reasons presumably have to do
with noise in the batch statistics and specifically a balance between a beneficial regularising effect
at intermediate batch sizes and an excess of noise at small batches.
Our batches are of size 512 and we can't afford to reduce them without taking a serious hit on training times,
but we can apply batch norm separately to subsets of a training batch. This technique, known as 'ghost' batch
norm, is usually used in a distributed setting but is just as useful when using large batches on a single node.
It isn't supported directly in PyTorch but we can roll our own easily enough.
"""
def __init__(self, num_features, num_splits, eps=1e-05, momentum=0.1, weight=True, bias=True):
super(GhostBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum)
self.weight.data.fill_(1.0)
self.bias.data.fill_(0.0)
self.weight.requires_grad = weight
self.bias.requires_grad = bias
self.num_splits = num_splits
self.register_buffer('running_mean', torch.zeros(num_features*self.num_splits))
self.register_buffer('running_var', torch.ones(num_features*self.num_splits))
def train(self, mode=True):
if (self.training is True) and (mode is False):
self.running_mean = torch.mean(self.running_mean.view(self.num_splits, self.num_features), dim=0).repeat(self.num_splits)
self.running_var = torch.mean(self.running_var.view(self.num_splits, self.num_features), dim=0).repeat(self.num_splits)
return super(GhostBatchNorm, self).train(mode)
def forward(self, input):
N, C, H, W = input.shape
if self.training or not self.track_running_stats:
return F.batch_norm(
input.view(-1, C*self.num_splits, H, W), self.running_mean, self.running_var,
self.weight.repeat(self.num_splits), self.bias.repeat(self.num_splits),
True, self.momentum, self.eps).view(N, C, H, W)
else:
return F.batch_norm(
input, self.running_mean[:self.num_features], self.running_var[:self.num_features],
self.weight, self.bias, False, self.momentum, self.eps)
###Output
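###Markdown
A quick shape check of the `GhostBatchNorm` layer defined above (the feature count, batch size and spatial size below are arbitrary): with a batch of 128 and 4 splits, the normalization statistics are computed over virtual batches of 32, while the output shape stays identical to standard BatchNorm.
###Code
# Illustrative only; sizes are arbitrary.
gbn = GhostBatchNorm(num_features=16, num_splits=4, weight=False)
xb = torch.randn(128, 16, 24, 24)
print(gbn(xb).shape)  # torch.Size([128, 16, 24, 24])
###Output
_____no_output_____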
###Markdown
The modelLet's start with the model we first saw
###Code
dropout_value=0.05
class Net(nn.Module):
def __init__(self, Ghost_BN = False):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16) if Ghost_BN is False else GhostBatchNorm(num_features=16,num_splits=4, weight=False),
nn.Dropout(dropout_value)
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16) if Ghost_BN is False else GhostBatchNorm(num_features=16,num_splits=4, weight=False),
nn.Dropout(dropout_value)
) # output_size = 24
# TRANSITION BLOCK 1
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(1, 1), padding=0, bias=False),
nn.ReLU(),
) # output_size = 24
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12
# CONVOLUTION BLOCK 2
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16) if Ghost_BN is False else GhostBatchNorm(num_features=16,num_splits=4, weight=False),
nn.Dropout(dropout_value)
) # output_size = 10
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16) if Ghost_BN is False else GhostBatchNorm(num_features=16,num_splits=4, weight=False),
nn.Dropout(dropout_value)
) # output_size = 8
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10) if Ghost_BN is False else GhostBatchNorm(num_features=10,num_splits=4, weight=False),
nn.Dropout(dropout_value)
) # output_size = 6
# OUTPUT BLOCK
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=1, bias=False),
nn.ReLU(),
nn.BatchNorm2d(10) if Ghost_BN is False else GhostBatchNorm(num_features=10,num_splits=4, weight=False),
nn.Dropout(dropout_value)
) # output_size = 6
self.gap = nn.Sequential(
nn.AvgPool2d(kernel_size=6)
)
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
# nn.BatchNorm2d(10), NEVER
# nn.ReLU() NEVER!
) # output_size = 1
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.convblock7(x)
x = self.gap(x)
x = self.convblock8(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
###Output
_____no_output_____
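###Markdown
A quick forward-pass check of the network above with the Ghost BatchNorm variant enabled (the batch size is arbitrary but must be divisible by the number of splits); the output should be one log-probability per class for each sample.
###Code
# Illustrative only; a single CPU forward pass is enough for a shape check.
net_gbn = Net(Ghost_BN=True)
out = net_gbn(torch.randn(128, 1, 28, 28))
print(out.shape)  # torch.Size([128, 10])
###Output
_____no_output_____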
###Markdown
Model ParamsCan't emphasize enough how important viewing the Model Summary is. Unfortunately, there is no in-built model visualizer, so we have to take external help
###Code
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
###Output
Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1)
cuda
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 16, 26, 26] 144
ReLU-2 [-1, 16, 26, 26] 0
BatchNorm2d-3 [-1, 16, 26, 26] 32
Dropout-4 [-1, 16, 26, 26] 0
Conv2d-5 [-1, 16, 24, 24] 2,304
ReLU-6 [-1, 16, 24, 24] 0
BatchNorm2d-7 [-1, 16, 24, 24] 32
Dropout-8 [-1, 16, 24, 24] 0
Conv2d-9 [-1, 16, 24, 24] 256
ReLU-10 [-1, 16, 24, 24] 0
MaxPool2d-11 [-1, 16, 12, 12] 0
Conv2d-12 [-1, 16, 10, 10] 2,304
ReLU-13 [-1, 16, 10, 10] 0
BatchNorm2d-14 [-1, 16, 10, 10] 32
Dropout-15 [-1, 16, 10, 10] 0
Conv2d-16 [-1, 16, 8, 8] 2,304
ReLU-17 [-1, 16, 8, 8] 0
BatchNorm2d-18 [-1, 16, 8, 8] 32
Dropout-19 [-1, 16, 8, 8] 0
Conv2d-20 [-1, 10, 6, 6] 1,440
ReLU-21 [-1, 10, 6, 6] 0
BatchNorm2d-22 [-1, 10, 6, 6] 20
Dropout-23 [-1, 10, 6, 6] 0
Conv2d-24 [-1, 10, 6, 6] 900
ReLU-25 [-1, 10, 6, 6] 0
BatchNorm2d-26 [-1, 10, 6, 6] 20
Dropout-27 [-1, 10, 6, 6] 0
AvgPool2d-28 [-1, 10, 1, 1] 0
Conv2d-29 [-1, 10, 1, 1] 100
================================================================
Total params: 9,920
Trainable params: 9,920
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.87
Params size (MB): 0.04
Estimated Total Size (MB): 0.91
----------------------------------------------------------------
###Markdown
Training and TestingLooking at logs can be boring, so we'll introduce **tqdm** progressbar to get cooler logs. Let's write train and test functions
###Code
from tqdm import tqdm
def train(model, device, train_loader, optimizer, epoch, l1_penalty = 0):
train_losses = []
train_accuracy = []
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
# l1 regularization
if l1_penalty:
with torch.enable_grad():
l1_loss=0
for param in model.parameters():
l1_loss+=torch.sum(param.abs())
loss+=l1_penalty*l1_loss
        train_losses.append(loss.item())  # store a plain float so the per-batch losses can be plotted later
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy: {100*correct/processed:0.2f}% ')
train_accuracy.append(100*correct/processed)
return train_losses, train_accuracy
def test(model, device, test_loader):
test_losses = []
test_accuracy = []
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_accuracy.append(100. * correct / len(test_loader.dataset))
return test_losses, test_accuracy
###Output
_____no_output_____
###Markdown
Let's Train and test our model
###Code
model_versions = {1:"L1 + BN", 2:"L2 + BN", 3:"L1 + L2 + BN", 4:"GBN", 5:"L1 + GBN", 6:"L2 + GBN", 7:"L1 + L2 + GBN"}
print(model_versions[1])
print(model_versions[2])
print(model_versions[3])
print(model_versions[4])
print(model_versions[5])
print(model_versions[6])
print(model_versions[7])
def get_regularization_params(model_version):
if "GBN" in model_version:
Ghost_BN = True
else:
Ghost_BN = False
if "L1" in model_version:
l1_penalty = 0.0001
else:
l1_penalty = 0
if "L2" in model_version:
l2_penalty = 1e-5
else:
l2_penalty = 0
return l1_penalty, l2_penalty, Ghost_BN
EPOCHS = 25
model_history = {}
for model_number, model_version in model_versions.items():
print(model_version)
train_loss=[]
train_acc=[]
test_loss=[]
test_acc=[]
l1_penalty, l2_penalty, Ghost_BN = get_regularization_params(model_version)
print(f"l1_penalty: {l1_penalty}"); print(f"l2_penalty: {l2_penalty}"); print(f"Ghost_BN: {Ghost_BN}")
model = Net(Ghost_BN).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=l2_penalty)
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train_loss_epoch, train_accuracy_epoch = train(model, device, train_loader, optimizer, epoch,l1_penalty)
train_loss.append(train_loss_epoch)
train_acc.append(train_accuracy_epoch)
test_loss_epoch, test_accuracy_epoch = test(model, device, test_loader)
test_loss.append(test_loss_epoch)
test_acc.append(test_accuracy_epoch)
model_history[model_number]={"train_loss":train_loss, "train_acc":train_acc, "test_loss":test_loss, "test_acc":test_acc}
# print(f"\nMaximum training accuracy: {train_max}\n")
# print(f"\nMaximum test accuracy: {test_max}\n")
###Output
0%| | 0/469 [00:00<?, ?it/s]
###Markdown
Plotting Results
###Code
fig, ax = plt.subplots()
for model_number, model_version in model_versions.items():
ax.plot(model_history[model_number]["train_loss"][0],label=model_version)
leg = ax.legend()
plt.title('Train Losses')
plt.xlabel("Epochs")
plt.ylabel("Train Loss")
fig, ax = plt.subplots()
for model_number, model_version in model_versions.items():
ax.plot(model_history[model_number]["test_loss"],label=model_version)
leg = ax.legend()
plt.title('Validation Losses')
plt.xlabel("Epochs")
plt.ylabel("Validation Loss")
fig, ax = plt.subplots()
for model_number, model_version in model_versions.items():
ax.plot(model_history[model_number]["train_acc"][0],label=model_version)
leg = ax.legend()
plt.title('Train Accuracies')
plt.xlabel("Epochs")
plt.ylabel("Train Accuracy")
fig, ax = plt.subplots()
for model_number, model_version in model_versions.items():
ax.plot(model_history[model_number]["test_acc"],label=model_version)
leg = ax.legend()
plt.title('Validation Accuracies')
plt.xlabel("Epochs")
plt.ylabel("Validation Accuracy")
fig, ax = plt.subplots()
for model_number, model_version in model_versions.items():
after15epochs_acc = model_history[model_number]["test_acc"][15:]
ax.plot([x for x in range(15,25)],after15epochs_acc,label=model_version)
# ax.plot(model_history[model_number]["test_acc"],label=model_version)
leg = ax.legend()
plt.title('Validation Accuracies')
plt.xlabel("Epochs")
plt.ylabel("Validation Accuracy")
# plt.savefig(f'{root_path}/validation_accuracy_plot_after15epochs.png')
###Output
_____no_output_____
###Markdown
Interactive Plots
###Code
!pip install plotly
!pip install notebook ipywidgets
from plotly.offline import iplot
from plotly.subplots import make_subplots
import plotly.graph_objects as go
# color=["white","red", "blue", "green", "yellow", "gray", "black", "orange"]
# print(color[1])
# for model_number, model_version in model_versions.items():
# if model_number > 4:
# after15epochs_acc = model_history[model_number]["test_acc"][15:]
# # print(model_history[model_number]["test_acc"][15:])
# # print([x[0] for x in after15epochs_acc])
# fig1 = make_subplots(rows=1, cols=1)
# fig1.add_trace(go.Scatter(x=[x for x in range(15,25)], y = [x[0] for x in after15epochs_acc]),row=1, col=1)
# # fig1.add_trace(go.Scatter(name=model_version,x=[x for x in range(15,25)], y = [x[0] for x in after15epochs_acc],mode='markers+lines',marker=dict(color='blue', size=2),showlegend=True))
# fig1.add_trace(go.Scatter(name=model_version,x=[x for x in range(15,25)], y = [x[0] for x in after15epochs_acc],mode='markers+lines',marker=dict(color=color[model_number], size=2),showlegend=True))
# fig1.update_layout(height=600, width=800, title_text="Side By Side Subplots")
# fig1.show()
color=["white","red", "blue", "green", "yellow", "gray", "black", "orange"]
fig1 = make_subplots(rows=1, cols=1)
# fig1.add_trace(go.Scatter(x=[x for x in range(15,25)], y = [x[0] for x in model_history[1]["test_acc"][:]]),row=1, col=1)
for model_number, model_version in model_versions.items():
    fig1.add_trace(go.Scatter(name=model_version,
                              x=list(range(EPOCHS)),  # one point per epoch, matching the accuracy list
                              y=[acc[0] for acc in model_history[model_number]["test_acc"]],
                              mode='markers+lines',
                              marker=dict(color=color[model_number], size=2),
                              showlegend=True))
fig1.update_layout(height=600, width=800, title_text="Validation Accuracies")
fig1.update_xaxes(title_text="Epochs", row=1, col=1)
fig1.update_yaxes(title_text="Validation Accuracy", row=1, col=1)
fig1.show()
# fig1.write_image("Validation Accuracies.jpeg")
fig1 = make_subplots(rows=1, cols=1)
for model_number, model_version in model_versions.items():
    fig1.add_trace(go.Scatter(name=model_version,
                              x=list(range(15, 25)),
                              y=[acc[0] for acc in model_history[model_number]["test_acc"][15:]],
                              mode='markers+lines',
                              marker=dict(color=color[model_number], size=2),
                              showlegend=True))
fig1.update_layout(height=600, width=800, title_text="Validation Accuracies")
fig1.update_xaxes(title_text="Epochs", row=1, col=1)
fig1.update_yaxes(title_text="Validation Accuracy", row=1, col=1)
fig1.show()
###Output
_____no_output_____
###Markdown
Misclassified Images by GBN
###Code
# miss_classification(model, device, testloader = test_loader, model_version, num_of_images = 25)
def miss_classification(model, device, testloader,model_version, num_of_images = 25):
model.eval()
misclassified_cnt = 0
fig = plt.figure(figsize=(12,12))
# print (f"Missclassification on {model_version}")
fig.suptitle(f"Missclassification on {model_version}", fontsize=16)
for data, target in testloader:
data, target = data.to(device), target.to(device)
output = model(data)
pred = output.argmax(dim=1, keepdim=True)
pred_marker = pred.eq(target.view_as(pred))
wrong_idx = (pred_marker == False).nonzero()
for idx in wrong_idx:
index = idx[0].item()
title = "Actual:{}, Prediction:{}".format(target[index].item(), pred[index][0].item())
ax = fig.add_subplot(5, 5, misclassified_cnt+1, xticks=[], yticks=[])
ax.set_title(title)
plt.imshow(data[index].cpu().numpy().squeeze(), cmap='gray_r')
misclassified_cnt += 1
if(misclassified_cnt==num_of_images):
break
if(misclassified_cnt==num_of_images):
break
fig.savefig(f"({model_version})_missclassified_images.jpg")
%matplotlib inline
miss_classification(model, device, test_loader,"GBN", num_of_images = 25)
# To plot for each model and create a zip
# %matplotlib inline
# for model_number, model_version in model_versions.items():
# miss_classification(model, device, test_loader,model_version, num_of_images = 25)
# !zip img.zip *.jpg
###Output
_____no_output_____ |
docs/examples/batch-to-online.ipynb | ###Markdown
From batch to online/stream A quick overview of batch learningIf you've already delved into machine learning, then you shouldn't have any difficulty in getting to use incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook in particular is to introduce simple notions. We'll also start to show how `river` fits in and explain how to use it.The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile, in unsupervised learning there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically means that all of the data is available to our learning algorithm at once. Multiple libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/).As a simple example of batch learning let's say we want to learn to predict whether a woman has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html). We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logistic regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using a 5-fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)).
###Code
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
from sklearn import pipeline
from sklearn import preprocessing
# Load the data
dataset = datasets.load_breast_cancer()
X, y = dataset.data, dataset.target
# Define the steps of the model
model = pipeline.Pipeline([
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LogisticRegression(solver='lbfgs'))
])
# Define a deterministic cross-validation procedure
cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# Compute the ROC AUC scores
scorer = metrics.make_scorer(metrics.roc_auc_score)
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.975 (± 0.011)
###Markdown
This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to:1. Loading (and preprocessing) the data2. Fitting a model to the data3. Computing the performance of the model on unseen dataThis is pretty standard and is maybe how most people imagine a machine learning pipeline. However, this way of proceeding has certain downsides. First of all your laptop would crash if the `load_breast_cancer` function returned a dataset whose size exceeds your available amount of RAM. Sometimes you can use some tricks to get around this. For example by optimizing the data types and by using sparse representations when applicable you can potentially save precious gigabytes of RAM. However, like many tricks this only goes so far. If your dataset weighs hundreds of gigabytes then you won't go far without some special hardware. One solution is to do out-of-core learning; that is, algorithms that can learn by being presented the data in chunks or mini-batches. If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/).Another issue with the batch learning regime is that it can't elegantly learn from new data. Indeed, if new data is made available, then the model has to learn from scratch with a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even second. For example if you're building a recommendation engine for an e-commerce app, then you're probably training your model from 0 every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade.A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Every time you want to train your model you first have to extract features. The trick is that some features might not be accessible at the particular point in time you are at. For example, some attributes in your data warehouse might get overwritten over time. In other words, all the features pertaining to a particular observation might no longer be available, whereas they were a week ago. This happens more often than not in real scenarios, and unless you have a sophisticated data engineering pipeline you will encounter these issues at some point. A hands-on introduction to incremental learningIncremental learning is also often called *online learning* or *stream learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence, the terms "incremental learning" and "stream learning" (from which `river` derives its name) are preferred. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in its entirety, but rather the observations are provided one by one. As an example let's stream through the dataset used previously.
###Code
for xi, yi in zip(X, y):
# This is where the model learns
pass
###Output
_____no_output_____
###Markdown
In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc. If we look at `xi` we can notice that it is a `numpy.ndarray`.
###Code
xi
###Output
_____no_output_____
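###Markdown
As mentioned above, the stream could just as well come from disk rather than from memory. Here is a minimal sketch of streaming `(features, target)` pairs from a CSV file using only the standard library; the file name and the target column are made up for illustration.
###Code
import csv
def iter_csv_rows(path, target_col):
    """Yield (features dict, target) pairs one row at a time, without loading the whole file."""
    with open(path) as f:
        for row in csv.DictReader(f):
            y_row = row.pop(target_col)
            x_row = {k: float(v) for k, v in row.items()}  # naive conversion; real data needs per-column handling
            yield x_row, y_row
# Example usage (commented out because the file is hypothetical):
# for xi, yi in iter_csv_rows('observations.csv', target_col='target'):
#     pass
###Output
_____no_output_____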
###Markdown
`river` by design works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least when single observations are concerned. `dict`s bring the added benefit that each feature can be accessed by name rather than by position.
###Code
for xi, yi in zip(X, y):
xi = dict(zip(dataset.feature_names, xi))
pass
xi
###Output
_____no_output_____
###Markdown
Conveniently, `river`'s `stream` module has an `iter_sklearn_dataset` method that we can use instead.
###Code
from river import stream
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
pass
###Output
_____no_output_____
###Markdown
The simple fact that we are getting the data as a stream means that we can't do a lot of things the same way as in a batch setting. For example let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature from each value and then divide the result by the standard deviation of the feature. The problem is that we can't possibly know the values of the mean and the standard deviation before actually going through all the data! One way to proceed would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that this defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road.The way we do feature scaling in `river` involves computing *running statistics* (also known as *moving statistics*). The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done as so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}}\end{cases}$$Likewise, the running variance can be computed as so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \\s_{t+1} = s_t + (x - \mu_t) \times (x - \mu_{t+1}) \\\sigma_{t+1} = \frac{s_{t+1}}{n_{t+1}}\end{cases}$$where $s_t$ is a running sum of squares and $\sigma_t$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult. For example let's compute the running mean and variance of the `'mean area'` variable.
###Code
n, mean, sum_of_squares, variance = 0, 0, 0, 0
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
n += 1
old_mean = mean
mean += (xi['mean area'] - mean) / n
sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean)
variance = sum_of_squares / n
print(f'Running mean: {mean:.3f}')
print(f'Running variance: {variance:.3f}')
###Output
Running mean: 654.889
Running variance: 123625.903
###Markdown
Let's compare this with `numpy`. But remember, `numpy` requires access to "all" the data.
###Code
import numpy as np
i = list(dataset.feature_names).index('mean area')
print(f'True mean: {np.mean(X[:, i]):.3f}')
print(f'True variance: {np.var(X[:, i]):.3f}')
###Output
True mean: 654.889
True variance: 123625.903
###Markdown
The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this discrepancy is beneficial and acts as some sort of regularization...Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `river` is to use the `StandardScaler` class from the `preprocessing` module, as so:
###Code
from river import preprocessing
scaler = preprocessing.StandardScaler()
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
scaler = scaler.learn_one(xi)
###Output
_____no_output_____
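###Markdown
A quick peek at what the fitted scaler produces for the last sample seen in the loop above (illustrative only); each feature should now be roughly centred and scaled.
###Code
# Illustrative only; xi is the last observation from the loop above.
scaled = scaler.transform_one(xi)
print({k: round(v, 3) for k, v in list(scaled.items())[:5]})
###Output
_____no_output_____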
###Markdown
Now that we are scaling the data, we can start doing some actual machine learning. We're going to implement an online logistic regression task. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent*, which is a popular research topic and has a lot of variants. SGD is commonly used to train neural networks. The idea is that at each step we compute the loss between the target prediction and the truth. We then calculate the gradient, which is simply a set of derivatives with respect to each weight of the logistic regression. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitly. Online logistic regression can be done in `river` with the `LogisticRegression` class from the `linear_model` module. We'll be using plain and simple SGD with the `SGD` optimizer from the `optim` module. During training we'll make a prediction for each sample before learning from it, and at the end we'll measure the ROC AUC between the truth and the predictions.
###Code
from river import linear_model
from river import optim
scaler = preprocessing.StandardScaler()
optimizer = optim.SGD(lr=0.01)
log_reg = linear_model.LogisticRegression(optimizer)
y_true = []
y_pred = []
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42):
# Scale the features
xi_scaled = scaler.learn_one(xi).transform_one(xi)
# Test the current model on the new "unobserved" sample
yi_pred = log_reg.predict_proba_one(xi_scaled)
# Train the model with the new sample
log_reg.learn_one(xi_scaled, yi)
# Store the truth and the prediction
y_true.append(yi)
y_pred.append(yi_pred[True])
print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}')
###Output
ROC AUC: 0.990
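###Markdown
For intuition, here is the update rule described above written out by hand for a single feature; this is a toy sketch of one SGD step with made-up numbers, not how `river` implements it internally.
###Code
import math
# Toy example of a single SGD step on one observation (all values are made up).
w, b, lr = 0.0, 0.0, 0.01
x_val, y_val = 2.0, 1.0                   # feature value and binary target
p = 1 / (1 + math.exp(-(w * x_val + b)))  # predicted probability
grad_w = (p - y_val) * x_val              # derivative of the log loss with respect to w
grad_b = p - y_val
w -= lr * grad_w                          # move against the gradient
b -= lr * grad_b
print(w, b)
###Output
_____no_output_____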
###Markdown
The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logistic regression. However, to make things really comparable it would be nice to compare with the same cross-validation procedure. `river` has a `compat` module that contains utilities for making `river` compatible with other Python libraries. Because we want to plug our model into scikit-learn's cross-validation tools, we'll wrap it with `convert_river_to_sklearn`. We'll also be using `Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in one single object.
###Code
from river import compat
from river import compose
# We define a Pipeline, exactly like we did earlier for sklearn
model = compose.Pipeline(
('scale', preprocessing.StandardScaler()),
('log_reg', linear_model.LogisticRegression())
)
# We make the Pipeline compatible with sklearn
model = compat.convert_river_to_sklearn(model)
# We compute the CV scores using the same CV scheme and the same scoring
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.964 (± 0.016)
###Markdown
From batch to online/stream A quick overview of batch learningIf you've already delved into machine learning, then you shouldn't have any difficulty in getting to use incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook in particular is to introduce simple notions. We'll also start to show how `river` fits in and explain how to use it.The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile, in unsupervised learning there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically means that all of the data is available to our learning algorithm at once. Multiple libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/).As a simple example of batch learning let's say we want to learn to predict whether a woman has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html). We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logistic regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using a 5-fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)).
###Code
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
from sklearn import pipeline
from sklearn import preprocessing
# Load the data
dataset = datasets.load_breast_cancer()
X, y = dataset.data, dataset.target
# Define the steps of the model
model = pipeline.Pipeline([
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LogisticRegression(solver='lbfgs'))
])
# Define a deterministic cross-validation procedure
cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# Compute the ROC AUC score on each fold
scorer = metrics.make_scorer(metrics.roc_auc_score)
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.975 (± 0.011)
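###Markdown
Before looking at the downsides of this approach, note that scikit-learn itself offers a limited form of incremental learning through estimators that expose `partial_fit`. The following is only a rough sketch: the choice of `SGDClassifier`, the number of chunks, and the absence of scaling are illustrative assumptions, not a recommendation.
###Code
from sklearn import linear_model
import numpy as np
# An estimator that supports incremental updates via partial_fit
sgd = linear_model.SGDClassifier(random_state=42)
# Feed the data in chunks, as if the full dataset didn't fit in memory
for X_chunk, y_chunk in zip(np.array_split(X, 10), np.array_split(y, 10)):
    sgd.partial_fit(X_chunk, y_chunk, classes=np.unique(y))
# Sanity check on the training data
print(f'ROC AUC: {metrics.roc_auc_score(y, sgd.decision_function(X)):.3f}')
###Output
_____no_output_____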
###Markdown
This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to: 1. Loading (and preprocessing) the data, 2. Fitting a model to the data, 3. Computing the performance of the model on unseen data. This is pretty standard and is maybe how most people imagine a machine learning pipeline. However, this way of proceeding has certain downsides. First of all, your laptop would crash if the `load_breast_cancer` function returned a dataset whose size exceeds your available amount of RAM. Sometimes you can use some tricks to get around this. For example, by optimizing the data types and by using sparse representations when applicable you can potentially save precious gigabytes of RAM. However, like many tricks this only goes so far. If your dataset weighs hundreds of gigabytes then you won't go far without some special hardware. One solution is to do out-of-core learning; that is, to use algorithms that can learn by being presented with the data in chunks or mini-batches. If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/). Another issue with the batch learning regime is that it can't elegantly learn from new data. Indeed, if new data is made available, then the model has to learn from scratch with a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even second. For example, if you're building a recommendation engine for an e-commerce app, then you're probably training your model from 0 every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade. A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Every time you want to train your model you first have to extract features. The trick is that some features might not be accessible at the particular point in time you are at. For example, maybe some attributes in your data warehouse get overwritten over time. In other words, maybe all of the features pertaining to a particular observation are not available anymore, whereas they were a week ago. This happens more often than not in real scenarios, and unless you have a sophisticated data engineering pipeline you will encounter these issues at some point. A hands-on introduction to incremental learning Incremental learning is also often called *online learning* or *stream learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence, the terms "incremental learning" and "stream learning" (from which `river` derives its name) are preferred. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in its entirety, but rather the observations are provided one by one. As an example, let's stream through the dataset used previously.
###Code
for xi, yi in zip(X, y):
# This is where the model learns
pass
###Output
_____no_output_____
###Markdown
In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc. If we look at `xi` we can notice that it is a `numpy.ndarray`.
###Code
xi
###Output
_____no_output_____
###Markdown
`river` by design works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least when single observations are concerned. `dict`s bring the added benefit that each feature can be accessed by name rather than by position.
###Code
for xi, yi in zip(X, y):
xi = dict(zip(dataset.feature_names, xi))
pass
xi
###Output
_____no_output_____
###Markdown
Conveniently, `river`'s `stream` module has an `iter_sklearn_dataset` method that we can use instead.
###Code
from river import stream
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
pass
###Output
_____no_output_____
###Markdown
The simple fact that we are getting the data as a stream means that we can't do a lot of things the same way as in a batch setting. For example, let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature from each value and then divide the result by the standard deviation of the feature. The problem is that we can't possibly know the values of the mean and the standard deviation before actually going through all the data! One way to proceed would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that this defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road. The way we do feature scaling in `river` involves computing *running statistics* (also known as *moving statistics*). The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done like so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}}\end{cases}$$Likewise, the running variance can be computed like so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \\s_{t+1} = s_t + (x - \mu_t) \times (x - \mu_{t+1}) \\\sigma_{t+1} = \frac{s_{t+1}}{n_{t+1}}\end{cases}$$where $s_t$ is a running sum of squares and $\sigma_t$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult. For example, let's compute the running mean and variance of the `'mean area'` variable.
###Code
n, mean, sum_of_squares, variance = 0, 0, 0, 0
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
n += 1
old_mean = mean
mean += (xi['mean area'] - mean) / n
sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean)
variance = sum_of_squares / n
print(f'Running mean: {mean:.3f}')
print(f'Running variance: {variance:.3f}')
###Output
Running mean: 654.889
Running variance: 123625.903
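###Markdown
These running statistics don't have to be maintained by hand. Here is a minimal sketch, assuming `river`'s `stats` module exposes `Mean` and `Var` objects with `update` and `get` methods and that `Var` accepts a `ddof` argument:
###Code
from river import stats
running_mean = stats.Mean()
running_var = stats.Var(ddof=0)  # ddof=0 to match the population variance computed above
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
    # Each statistic updates itself one value at a time
    running_mean.update(xi['mean area'])
    running_var.update(xi['mean area'])
print(f'Running mean: {running_mean.get():.3f}')
print(f'Running variance: {running_var.get():.3f}')
###Output
_____no_output_____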
###Markdown
Let's compare this with `numpy`. But remember, `numpy` requires access to "all" the data.
###Code
import numpy as np
i = list(dataset.feature_names).index('mean area')
print(f'True mean: {np.mean(X[:, i]):.3f}')
print(f'True variance: {np.var(X[:, i]):.3f}')
###Output
True mean: 654.889
True variance: 123625.903
###Markdown
The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this discrepancy is beneficial and acts as some sort of regularization... Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `river` is to use the `StandardScaler` class from the `preprocessing` module, like so:
###Code
from river import preprocessing
scaler = preprocessing.StandardScaler()
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
scaler = scaler.learn_one(xi)
###Output
_____no_output_____
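###Markdown
Learning the scaler doesn't modify the observations by itself; `transform_one` is what produces the scaled features. As a small sketch, reusing the last observation left over from the loop above:
###Code
# Scale a single observation using the statistics accumulated so far
scaler.transform_one(xi)
###Output
_____no_output_____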
###Markdown
Now that we are scaling the data, we can start doing some actual machine learning. We're going to implement an online logistic regression. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent*, which is a popular research topic and has a lot of variants. SGD is commonly used to train neural networks. The idea is that at each step we compute the loss between the predicted target and the truth. We then calculate the gradient, which is simply a set of derivatives with respect to each weight of the model. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is usually set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitly. Online logistic regression can be done in `river` with the `LogisticRegression` class from the `linear_model` module. We'll be using plain and simple SGD via the `SGD` optimizer from the `optim` module. During training we'll store the predicted probabilities alongside the true labels so that we can measure the ROC AUC at the end.
###Code
from river import linear_model
from river import optim
scaler = preprocessing.StandardScaler()
optimizer = optim.SGD(lr=0.01)
log_reg = linear_model.LogisticRegression(optimizer)
y_true = []
y_pred = []
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42):
# Scale the features
xi_scaled = scaler.learn_one(xi).transform_one(xi)
# Test the current model on the new "unobserved" sample
yi_pred = log_reg.predict_proba_one(xi_scaled)
# Train the model with the new sample
log_reg.learn_one(xi_scaled, yi)
# Store the truth and the prediction
y_true.append(yi)
y_pred.append(yi_pred[True])
print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}')
###Output
ROC AUC: 0.990
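###Markdown
The pattern used above (predict on each incoming sample, then learn from it) is known as progressive validation. As a sketch, assuming `river` exposes an `evaluate.progressive_val_score` helper and a `metrics.ROCAUC` metric (both names are assumptions, not taken from this notebook), the whole loop can be condensed into a single call:
###Code
from river import compose, evaluate
from river import metrics as river_metrics  # aliased to avoid shadowing sklearn.metrics
# Chain the scaler and the logistic regression into a single model
online_model = compose.Pipeline(
    ('scale', preprocessing.StandardScaler()),
    ('log_reg', linear_model.LogisticRegression())
)
evaluate.progressive_val_score(
    dataset=stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42),
    model=online_model,
    metric=river_metrics.ROCAUC()
)
###Output
_____no_output_____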
###Markdown
The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logistic regression. However, to make things truly comparable it would be nice to use the same cross-validation procedure. `river` has a `compat` module that contains utilities for making `river` compatible with other Python libraries. Because we're doing binary classification, we'll convert the model with `convert_river_to_sklearn` so that it can be used as a scikit-learn classifier. We'll also use a `Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in a single object.
###Code
from river import compat
from river import compose
# We define a Pipeline, exactly like we did earlier for sklearn
model = compose.Pipeline(
('scale', preprocessing.StandardScaler()),
('log_reg', linear_model.LogisticRegression())
)
# We make the Pipeline compatible with sklearn
model = compat.convert_river_to_sklearn(model)
# We compute the CV scores using the same CV scheme and the same scoring
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.964 (± 0.016)
###Markdown
From batch to online A quick overview of batch learning If you've already delved into machine learning, then you shouldn't have any difficulty in getting to use incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook in particular is to introduce simple notions. We'll also start to show how `creme` fits in and explain how to use it. The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile, in unsupervised learning there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically means that all of the data is available to our learning algorithm at once. A lot of libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/). As a simple example of batch learning, let's say we want to learn to predict whether a woman has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html). We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logistic regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using a 5-fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)).
###Code
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
from sklearn import pipeline
from sklearn import preprocessing
# Load the data
dataset = datasets.load_breast_cancer()
X, y = dataset.data, dataset.target
# Define the steps of the model
model = pipeline.Pipeline([
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LogisticRegression(solver='lbfgs'))
])
# Define a deterministic cross-validation procedure
cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# Compute the ROC AUC score on each fold
scorer = metrics.make_scorer(metrics.roc_auc_score)
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.975 (± 0.011)
###Markdown
This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to: 1. Loading the data, 2. Fitting a model to the data, 3. Computing the performance of the model on unseen data. This is pretty standard and is maybe how most people imagine a machine learning pipeline. However, this way of proceeding has certain downsides. First of all, your laptop would crash if the `load_breast_cancer` function returned a dataset whose size exceeds your available amount of RAM. Sometimes you can use some tricks to get around this. For example, by optimizing the data types and by using sparse representations when applicable you can potentially save precious gigabytes of RAM. However, like many tricks this only goes so far. If your dataset weighs hundreds of gigabytes then you won't go far without some special hardware. One solution is to do out-of-core learning; that is, to use algorithms that can learn by being presented with the data in chunks. If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/). Another issue with the batch learning regime is that it can't elegantly learn from new data. Indeed, if new data is made available, then the model has to learn from scratch with a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even second. For example, if you're building a recommendation engine for an e-commerce app, then you're probably training your model from 0 every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade. A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Every time you want to train your model you first have to extract features. The trick is that some features might not be accessible at the particular point in time you are at. For example, maybe some attributes in your data warehouse get overwritten over time. In other words, maybe all of the features pertaining to a particular observation are not available anymore, whereas they were a week ago. This happens more often than not in real scenarios, and unless you have a sophisticated data engineering pipeline you will encounter these issues at some point. A hands-on introduction to incremental learning Incremental learning is also often called *online learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence we prefer the name "incremental learning", from which `creme` derives its name. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in its entirety, but rather the observations are provided one by one. As an example, let's stream through the dataset used previously.
###Code
for xi, yi in zip(X, y):
# This is where the model learns
pass
###Output
_____no_output_____
###Markdown
In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc. If we look at `xi` we can notice that it is a `numpy.ndarray`.
###Code
xi
###Output
_____no_output_____
###Markdown
`creme` on the other hand works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least when single observations are concerned. `dict`s bring the added benefit that each feature can be accessed by name rather than by position.
###Code
for xi, yi in zip(X, y):
xi = dict(zip(dataset.feature_names, xi))
pass
xi
###Output
_____no_output_____
###Markdown
`creme`'s `stream` module has an `iter_sklearn_dataset` convenience function that we can use instead.
###Code
from creme import stream
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
pass
###Output
_____no_output_____
###Markdown
The simple fact that we are getting the data in a stream means that we can't do a lot of things the same way as in a batch setting. For example, let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature from each value and then divide the result by the standard deviation of the feature. The problem is that we can't possibly know the values of the mean and the standard deviation before actually going through all the data! One way to proceed would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that this defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road. The way we do feature scaling in `creme` involves computing *running statistics*. The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done like so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}}\end{cases}$$Likewise, the running variance can be computed like so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \\s_{t+1} = s_t + (x - \mu_t) \times (x - \mu_{t+1}) \\\sigma_{t+1} = \frac{s_{t+1}}{n_{t+1}}\end{cases}$$where $s_t$ is a running sum of squares and $\sigma_t$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult. For example, let's compute the running mean and variance of the `'mean area'` variable.
###Code
n, mean, sum_of_squares, variance = 0, 0, 0, 0
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
n += 1
old_mean = mean
mean += (xi['mean area'] - mean) / n
sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean)
variance = sum_of_squares / n
print(f'Running mean: {mean:.3f}')
print(f'Running variance: {variance:.3f}')
###Output
Running mean: 654.889
Running variance: 123625.903
###Markdown
Let's compare this with `numpy`.
###Code
import numpy as np
i = list(dataset.feature_names).index('mean area')
print(f'True mean: {np.mean(X[:, i]):.3f}')
print(f'True variance: {np.var(X[:, i]):.3f}')
###Output
True mean: 654.889
True variance: 123625.903
###Markdown
The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this discrepancy is beneficial and acts as some sort of regularization... Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `creme` is to use the `StandardScaler` class from the `preprocessing` module, like so:
###Code
from creme import preprocessing
scaler = preprocessing.StandardScaler()
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
xi = scaler.fit_one(xi)
###Output
_____no_output_____
###Markdown
This is quite terse but let's break it down nonetheless. Every class in `creme` has a `fit_one(x, y)` method where all the magic happens. Now the important thing to notice is that the `fit_one` actually returns the output for the given input. This is one of the nice properties of online learning: inference can be done immediately. In `creme` each call to a `Transformer`'s `fit_one` will return the transformed output. Meanwhile calling `fit_one` with a `Classifier` or a `Regressor` will return the predicted target for the given set of features. The twist is that the prediction is made *before* looking at the true target `y`. This means that we get a free hold-out prediction every time we call `fit_one`. This can be used to monitor the performance of the model as it trains, which is obviously nice to have. Now that we are scaling the data, we can start doing some actual machine learning. We're going to implement an online logistic regression. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent*, which is a popular research topic and has a lot of variants. SGD is commonly used to train neural networks. The idea is that at each step we compute the loss between the predicted target and the truth. We then calculate the gradient, which is simply a set of derivatives with respect to each weight of the model. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is usually set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitly. Online logistic regression can be done in `creme` with the `LogisticRegression` class from the `linear_model` module. We'll be using plain and simple SGD via the `SGD` optimizer from the `optim` module. During training we'll store the predicted probabilities alongside the true labels so that we can measure the ROC AUC at the end.
###Code
from creme import linear_model
from creme import optim
scaler = preprocessing.StandardScaler()
optimizer = optim.SGD(lr=0.01)
log_reg = linear_model.LogisticRegression(optimizer)
y_true = []
y_pred = []
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42):
# Scale the features
xi_scaled = scaler.fit_one(xi).transform_one(xi)
# Fit the linear regression
yi_pred = log_reg.predict_proba_one(xi_scaled)
log_reg.fit_one(xi_scaled, yi)
# Store the truth and the prediction
y_true.append(yi)
y_pred.append(yi_pred[True])
print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}')
###Output
ROC AUC: 0.990
###Markdown
The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logistic regression. However, to make things truly comparable it would be nice to use the same cross-validation procedure. `creme` has a `compat` module that contains utilities for making `creme` compatible with other Python libraries. Because we're doing binary classification, we'll convert the model with `convert_creme_to_sklearn` so that it can be used as a scikit-learn classifier. We'll also use a `Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in a single object.
###Code
from creme import compat
from creme import compose
# We define a Pipeline, exactly like we did earlier for sklearn
model = compose.Pipeline(
('scale', preprocessing.StandardScaler()),
('log_reg', linear_model.LogisticRegression())
)
# We make the Pipeline compatible with sklearn
model = compat.convert_creme_to_sklearn(model)
# We compute the CV scores using the same CV scheme and the same scoring
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.964 (± 0.016)
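###Markdown
Since the wrapped pipeline follows scikit-learn's estimator interface, it can be dropped into any place a scikit-learn classifier is expected. A quick illustrative sketch, assuming the wrapper exposes the usual `fit` and `predict` methods (the 500-observation split below is arbitrary):
###Code
# Train on the first 500 observations and evaluate on the remaining ones
model.fit(X[:500], y[:500])
holdout_pred = model.predict(X[500:])
print(f'Holdout accuracy: {metrics.accuracy_score(y[500:], holdout_pred):.3f}')
###Output
_____no_output_____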
###Markdown
From batch to online/stream A quick overview of batch learning If you've already delved into machine learning, then you shouldn't have any difficulty in getting to use incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook in particular is to introduce simple notions. We'll also start to show how `river` fits in and explain how to use it. The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile, in unsupervised learning there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically means that all of the data is available to our learning algorithm at once. Multiple libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/). As a simple example of batch learning, let's say we want to learn to predict whether a woman has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html). We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logistic regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using a 5-fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)).
###Code
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
from sklearn import pipeline
from sklearn import preprocessing
# Load the data
dataset = datasets.load_breast_cancer()
X, y = dataset.data, dataset.target
# Define the steps of the model
model = pipeline.Pipeline([
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LogisticRegression(solver='lbfgs'))
])
# Define a deterministic cross-validation procedure
cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# Compute the ROC AUC score on each fold
scorer = metrics.make_scorer(metrics.roc_auc_score)
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.975 (± 0.011)
###Markdown
This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to:1. Loading (and preprocessing) the data2. Fitting a model to the data3. Computing the performance of the model on unseen dataThis is pretty standard and is maybe how most people imagine a machine learning pipeline. However, this way of proceeding has certain downsides. First of all your laptop would crash if the `load_boston` function returned a dataset who's size exceeds your available amount of RAM. Sometimes you can use some tricks to get around this. For example by optimizing the data types and by using sparse representations when applicable you can potentially save precious gigabytes of RAM. However, like many tricks this only goes so far. If your dataset weighs hundreds of gigabytes then you won't go far without some special hardware. One solution is to do out-of-core learning; that is, algorithms that can learn by being presented the data in chunks or mini-batches. If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/).Another issue with the batch learning regime is that it can't elegantly learn from new data. Indeed if new data is made available, then the model has to learn from scratch with a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even setting. For example if you're building a recommendation engine for an e-commerce app, then you're probably training your model from 0 every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade.A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Every time you want to train your model you first have to extract features. The trick is that some features might not be accessible at the particular point in time you are at. For example maybe that some attributes in your data warehouse get overwritten with time. In other words maybe that all the features pertaining to a particular observations are not available, whereas they were a week ago. This happens more often than not in real scenarios, and apart if you have a sophisticated data engineering pipeline then you will encounter these issues at some point. A hands-on introduction to incremental learningIncremental learning is also often called *online learning* or *stream learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence, the terms "incremental learning" and "stream learning" (from which `river` derives it's name) are prefered. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in it's entirety, but rather the observations are provided one by one. As an example let's stream through the dataset used previously.
###Code
for xi, yi in zip(X, y):
# This is where the model learns
pass
###Output
_____no_output_____
###Markdown
In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc. If we look at `xi` we can notice that it is a `numpy.ndarray`.
###Code
xi
###Output
_____no_output_____
###Markdown
`river` by design works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least for when single observations are concerned. `dict`'s bring the added benefit that each feature can be accessed by name rather than by position.
###Code
for xi, yi in zip(X, y):
xi = dict(zip(dataset.feature_names, xi))
pass
xi
###Output
_____no_output_____
###Markdown
Conveniently, `river`'s `stream` module has an `iter_sklearn_dataset` method that we can use instead.
###Code
from river import stream
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
pass
###Output
_____no_output_____
###Markdown
The simple fact that we are getting the data as a stream means that we can't do a lot of things the same way as in a batch setting. For example let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature to each value and then divide the result by the standard deviation of the feature. The problem is that we can't possible known the values of the mean and the standard deviation before actually going through all the data! One way to proceed would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that this defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road.The way we do feature scaling in `river` involves computing *running statistics* (also know as *moving statistics*). The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done as so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}}\end{cases}$$Likewise, the running variance can be computed as so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \\s_{t+1} = s_t + (x - \mu_t) \times (x - \mu_{t+1}) \\\sigma_{t+1} = \frac{s_{t+1}}{n_{t+1}}\end{cases}$$where $s_t$ is a running sum of squares and $\sigma_t$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult. For example let's compute the running mean and variance of the `'mean area'` variable.
###Code
n, mean, sum_of_squares, variance = 0, 0, 0, 0
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
n += 1
old_mean = mean
mean += (xi['mean area'] - mean) / n
sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean)
variance = sum_of_squares / n
print(f'Running mean: {mean:.3f}')
print(f'Running variance: {variance:.3f}')
###Output
Running mean: 654.889
Running variance: 123625.903
###Markdown
Let's compare this with `numpy`. But remember, `numpy` requires access to "all" the data.
###Code
import numpy as np
i = list(dataset.feature_names).index('mean area')
print(f'True mean: {np.mean(X[:, i]):.3f}')
print(f'True variance: {np.var(X[:, i]):.3f}')
###Output
True mean: 654.889
True variance: 123625.903
###Markdown
The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this descrepancy is beneficial and acts as some sort of regularization...Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `river` is to use the `StandardScaler` class from the `preprocessing` module, as so:
###Code
from river import preprocessing
scaler = preprocessing.StandardScaler()
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
scaler = scaler.learn_one(xi)
###Output
_____no_output_____
###Markdown
Now that we are scaling the data, we can start doing some actual machine learning. We're going to implement an online linear regression task. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent*, which is a popular research topic and has a lot of variants. SGD is commonly used to train neural networks. The idea is that at each step we compute the loss between the target prediction and the truth. We then calculate the gradient, which is simply a set of derivatives with respect to each weight from the linear regression. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is typically set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitly. Online linear regression can be done in `river` with the `LinearRegression` class from the `linear_model` module. We'll be using plain and simple SGD using the `SGD` optimizer from the `optim` module. During training we'll measure the squared error between the truth and the predictions.
###Code
from river import linear_model
from river import optim
scaler = preprocessing.StandardScaler()
optimizer = optim.SGD(lr=0.01)
log_reg = linear_model.LogisticRegression(optimizer)
y_true = []
y_pred = []
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42):
# Scale the features
xi_scaled = scaler.learn_one(xi).transform_one(xi)
# Test the current model on the new "unobserved" sample
yi_pred = log_reg.predict_proba_one(xi_scaled)
# Train the model with the new sample
log_reg.learn_one(xi_scaled, yi)
# Store the truth and the prediction
y_true.append(yi)
y_pred.append(yi_pred[True])
print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}')
###Output
ROC AUC: 0.990
###Markdown
The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logisitic regression. However to make things really comparable it would be nice to compare with the same cross-validation procedure. `river` has a `compat` module that contains utilities for making `river` compatible with other Python libraries. Because we're doing regression we'll be using the `SKLRegressorWrapper`. We'll also be using `Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in one single object.
###Code
from river import compat
from river import compose
# We define a Pipeline, exactly like we did earlier for sklearn
model = compose.Pipeline(
('scale', preprocessing.StandardScaler()),
('log_reg', linear_model.LogisticRegression())
)
# We make the Pipeline compatible with sklearn
model = compat.convert_river_to_sklearn(model)
# We compute the CV scores using the same CV scheme and the same scoring
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and it's standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.964 (± 0.016)
###Markdown
From batch to online/stream
A quick overview of batch learning
If you've already delved into machine learning, then you shouldn't have any difficulty getting started with incremental learning. If you are somewhat new to machine learning, then do not worry! The point of this notebook is to introduce simple notions. We'll also start to show how `river` fits in and explain how to use it.
The whole point of machine learning is to *learn from data*. In *supervised learning* you want to learn how to predict a target $y$ given a set of features $X$. Meanwhile, in *unsupervised learning* there is no target, and the goal is rather to identify patterns and trends in the features $X$. At this point most people tend to imagine $X$ as a somewhat big table where each row is an observation and each column is a feature, and they would be quite right. Learning from tabular data is part of what's called *batch learning*, which basically means that all of the data is available to our learning algorithm at once. Multiple libraries have been created to handle the batch learning regime, with one of the most prominent being Python's [scikit-learn](https://scikit-learn.org/stable/).
As a simple example of batch learning, let's say we want to learn to predict whether a woman has breast cancer or not. We'll use the [breast cancer dataset available with scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html). We'll learn to map a set of features to a binary decision using a [logistic regression](https://www.wikiwand.com/en/Logistic_regression). Like many other models based on numerical weights, logistic regression is sensitive to the scale of the features. Rescaling the data so that each feature has mean 0 and variance 1 is generally considered good practice. We can apply the rescaling and fit the logistic regression sequentially in an elegant manner using a [Pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). To measure the performance of the model we'll evaluate the average [ROC AUC score](https://www.wikiwand.com/en/Receiver_operating_characteristic) using a 5-fold [cross-validation](https://www.wikiwand.com/en/Cross-validation_(statistics)).
###Code
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
from sklearn import pipeline
from sklearn import preprocessing
# Load the data
dataset = datasets.load_breast_cancer()
X, y = dataset.data, dataset.target
# Define the steps of the model
model = pipeline.Pipeline([
('scale', preprocessing.StandardScaler()),
    ('log_reg', linear_model.LogisticRegression(solver='lbfgs'))
])
# Define a deterministic cross-validation procedure
cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# Compute the cross-validated ROC AUC scores
scorer = metrics.make_scorer(metrics.roc_auc_score)
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.975 (± 0.011)
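###Markdown
A small caveat: `make_scorer(metrics.roc_auc_score)` scores the hard class predictions returned by `predict`, whereas ROC AUC is usually computed from predicted probabilities or decision scores. If that is what you are after, scikit-learn's built-in `'roc_auc'` scorer does it for you. A minimal sketch reusing the objects defined in the previous cell:
###Code
# Same model and CV scheme, but scoring on probabilities via the built-in scorer
scores_proba = model_selection.cross_val_score(model, X, y, scoring='roc_auc', cv=cv)
print(f'ROC AUC (probability-based): {scores_proba.mean():.3f} (± {scores_proba.std():.3f})')
###Output
_____no_output_____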
###Markdown
This might be a lot to take in if you're not accustomed to scikit-learn, but it probably isn't if you are. Batch learning basically boils down to:
1. Loading (and preprocessing) the data
2. Fitting a model to the data
3. Computing the performance of the model on unseen data
This is pretty standard and is maybe how most people imagine a machine learning pipeline. However, this way of proceeding has certain downsides. First of all, your laptop would crash if `load_breast_cancer` returned a dataset whose size exceeded your available amount of RAM. Sometimes you can use tricks to get around this: for example, by optimizing the data types and by using sparse representations when applicable, you can potentially save precious gigabytes of RAM. However, like many tricks, this only goes so far. If your dataset weighs hundreds of gigabytes then you won't get far without special hardware. One solution is out-of-core learning, that is, algorithms that can learn by being presented the data in chunks or mini-batches (a short sketch follows below). If you want to go down this road then take a look at [Dask](https://examples.dask.org/machine-learning.html) and [Spark's MLlib](https://spark.apache.org/mllib/).
Another issue with the batch learning regime is that it can't elegantly learn from new data. If new data is made available, then the model has to learn from scratch on a new dataset composed of the old data and the new data. This is particularly annoying in a real situation where you might have new incoming data every week, day, hour, minute, or even second. For example, if you're building a recommendation engine for an e-commerce app, then you're probably retraining your model from scratch every week or so. As your app grows in popularity, so does the dataset you're training on. This will lead to longer and longer training times and might require a hardware upgrade.
A final downside that isn't very easy to grasp concerns the manner in which features are extracted. Every time you want to train your model you first have to extract features. The trick is that some features might not be accessible at the particular point in time you are at. For example, some attributes in your data warehouse may get overwritten over time. In other words, all the features pertaining to a particular observation may no longer be available, whereas they were a week ago. This happens more often than not in real scenarios, and unless you have a sophisticated data engineering pipeline you will encounter these issues at some point.
A hands-on introduction to incremental learning
Incremental learning is also often called *online learning* or *stream learning*, but if you [google online learning](https://www.google.com/search?q=online+learning) a lot of the results will point to educational websites. Hence, the terms "incremental learning" and "stream learning" (from which `river` derives its name) are preferred. The point of incremental learning is to fit a model to a stream of data. In other words, the data isn't available in its entirety; rather, the observations are provided one by one. As an example, let's stream through the dataset used previously.
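Before that, to make the out-of-core idea mentioned above concrete, here is a minimal sketch of the mini-batch flavour using scikit-learn's `SGDClassifier.partial_fit`; the split into 10 chunks is an arbitrary assumption made purely for illustration.
###Code
import numpy as np
from sklearn import datasets
from sklearn import linear_model

dataset = datasets.load_breast_cancer()
X, y = dataset.data, dataset.target

# An SGD-based classifier can be updated chunk by chunk via partial_fit
sgd = linear_model.SGDClassifier(random_state=42)

# Split the data into chunks to mimic data that doesn't fit in memory
for X_chunk, y_chunk in zip(np.array_split(X, 10), np.array_split(y, 10)):
    sgd.partial_fit(X_chunk, y_chunk, classes=np.unique(y))
###Output
_____no_output_____
###Markdown
With that detour done, let's stream through the dataset one observation at a time.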
###Code
for xi, yi in zip(X, y):
# This is where the model learns
pass
###Output
_____no_output_____
###Markdown
In this case we're iterating over a dataset that is already in memory, but we could just as well stream from a CSV file, a Kafka stream, an SQL query, etc. If we look at `xi` we can notice that it is a `numpy.ndarray`.
###Code
xi
###Output
_____no_output_____
###Markdown
`river` by design works with `dict`s. We believe that `dict`s are more enjoyable to program with than `numpy.ndarray`s, at least when single observations are concerned. `dict`s bring the added benefit that each feature can be accessed by name rather than by position.
###Code
for xi, yi in zip(X, y):
xi = dict(zip(dataset.feature_names, xi))
pass
xi
###Output
_____no_output_____
###Markdown
Conveniently, `river`'s `stream` module has an `iter_sklearn_dataset` method that we can use instead.
###Code
from river import stream
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
pass
###Output
_____no_output_____
###Markdown
The simple fact that we are getting the data as a stream means that we can't do a lot of things the same way as in a batch setting. For example, let's say we want to scale the data so that it has mean 0 and variance 1, as we did earlier. To do so we simply have to subtract the mean of each feature from each value and then divide the result by the standard deviation of the feature. The problem is that we can't possibly know the values of the mean and the standard deviation before actually going through all the data! One way to proceed would be to do a first pass over the data to compute the necessary values and then scale the values during a second pass. The problem is that this defeats our purpose, which is to learn by only looking at the data once. Although this might seem rather restrictive, it reaps sizable benefits down the road. The way we do feature scaling in `river` involves computing *running statistics* (also known as *moving statistics*). The idea is that we use a data structure that estimates the mean and updates itself when it is provided with a value. The same goes for the variance (and thus the standard deviation). For example, if we denote $\mu_t$ the mean and $n_t$ the count at any moment $t$, then updating the mean can be done like so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}}\end{cases}$$Likewise, the running variance can be computed like so:$$\begin{cases}n_{t+1} = n_t + 1 \\\mu_{t+1} = \mu_t + \frac{x - \mu_t}{n_{t+1}} \\s_{t+1} = s_t + (x - \mu_t) \times (x - \mu_{t+1}) \\\sigma_{t+1} = \frac{s_{t+1}}{n_{t+1}}\end{cases}$$where $s_t$ is a running sum of squares and $\sigma_t$ is the running variance at time $t$. This might seem a tad more involved than the batch algorithms you learn in school, but it is rather elegant. Implementing this in Python is not too difficult. For example, let's compute the running mean and variance of the `'mean area'` variable.
###Code
n, mean, sum_of_squares, variance = 0, 0, 0, 0
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
n += 1
old_mean = mean
mean += (xi['mean area'] - mean) / n
sum_of_squares += (xi['mean area'] - old_mean) * (xi['mean area'] - mean)
variance = sum_of_squares / n
print(f'Running mean: {mean:.3f}')
print(f'Running variance: {variance:.3f}')
###Output
Running mean: 654.889
Running variance: 123625.903
###Markdown
Let's compare this with `numpy`. But remember, `numpy` requires access to "all" the data.
###Code
import numpy as np
i = list(dataset.feature_names).index('mean area')
print(f'True mean: {np.mean(X[:, i]):.3f}')
print(f'True variance: {np.var(X[:, i]):.3f}')
###Output
True mean: 654.889
True variance: 123625.903
###Markdown
The results seem to be exactly the same! The twist is that the running statistics won't be very accurate for the first few observations. In general though this doesn't matter too much. Some would even go as far as to say that this discrepancy is beneficial and acts as some sort of regularization... Now the idea is that we can compute the running statistics of each feature and scale them as they come along. The way to do this with `river` is to use the `StandardScaler` class from the `preprocessing` module, like so:
###Code
from river import preprocessing
scaler = preprocessing.StandardScaler()
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer()):
scaler = scaler.learn_one(xi)
###Output
_____no_output_____
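###Markdown
As a quick sanity check, we can apply the running statistics accumulated above to the most recent observation with `transform_one` (which only applies the statistics, it does not update them); each value should now be roughly centered and scaled.
###Code
# `xi` is simply the last observation left over from the loop above
scaler.transform_one(xi)
###Output
_____no_output_____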
###Markdown
Now that we are scaling the data, we can start doing some actual machine learning. We're going to implement online logistic regression. Because all the data isn't available at once, we are obliged to do what is called *stochastic gradient descent*, which is a popular research topic and has a lot of variants. SGD is commonly used to train neural networks. The idea is that at each step we compute the loss between the prediction and the truth. We then calculate the gradient, which is simply a set of derivatives with respect to each weight of the model. Once we have obtained the gradient, we can update the weights by moving them in the opposite direction of the gradient. The amount by which the weights are moved typically depends on a *learning rate*, which is usually set by the user. Different optimizers have different ways of managing the weight update, and some handle the learning rate implicitly. Online logistic regression can be done in `river` with the `LogisticRegression` class from the `linear_model` module. We'll be using plain and simple SGD using the `SGD` optimizer from the `optim` module. During training we'll store the truth and the predicted probability of the positive class so that we can measure the ROC AUC afterwards.
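###Markdown
To make the update rule concrete, here is a minimal hand-rolled sketch of a single SGD step for logistic regression on a `dict` of features (the `sgd_update` function and `weights` dictionary are purely illustrative); `river`'s `LogisticRegression` and the `optim.SGD` optimizer used below handle all of this internally.
###Code
import math

def sgd_update(weights, x, y, lr=0.01):
    """Perform one stochastic gradient descent step for the log loss."""
    # Current prediction: sigmoid of the dot product between the weights and the features
    z = sum(weights.get(name, 0.) * value for name, value in x.items())
    p = 1. / (1. + math.exp(-z))
    # The derivative of the log loss with respect to each weight is (p - y) * x_i,
    # so each weight moves in the opposite direction of its gradient
    for name, value in x.items():
        weights[name] = weights.get(name, 0.) - lr * (p - y) * value
    return weights
###Output
_____no_output_____
###Markdown
Here is the same logic expressed with `river`'s building blocks.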
###Code
from river import linear_model
from river import optim
scaler = preprocessing.StandardScaler()
optimizer = optim.SGD(lr=0.01)
log_reg = linear_model.LogisticRegression(optimizer)
y_true = []
y_pred = []
for xi, yi in stream.iter_sklearn_dataset(datasets.load_breast_cancer(), shuffle=True, seed=42):
# Scale the features
xi_scaled = scaler.learn_one(xi).transform_one(xi)
# Test the current model on the new "unobserved" sample
yi_pred = log_reg.predict_proba_one(xi_scaled)
# Train the model with the new sample
log_reg.learn_one(xi_scaled, yi)
# Store the truth and the prediction
y_true.append(yi)
y_pred.append(yi_pred[True])
print(f'ROC AUC: {metrics.roc_auc_score(y_true, y_pred):.3f}')
###Output
ROC AUC: 0.990
###Markdown
The ROC AUC is significantly better than the one obtained from the cross-validation of scikit-learn's logistic regression. However, to make things really comparable it would be nice to compare with the same cross-validation procedure. `river` has a `compat` module that contains utilities for making `river` compatible with other Python libraries. Because we want to plug our model into scikit-learn's cross-validation tools, we'll convert it with `compat.convert_river_to_sklearn`. We'll also be using a `Pipeline` to encapsulate the logic of the `StandardScaler` and the `LogisticRegression` in a single object.
###Code
from river import compat
from river import compose
# We define a Pipeline, exactly like we did earlier for sklearn
model = compose.Pipeline(
('scale', preprocessing.StandardScaler()),
('log_reg', linear_model.LogisticRegression())
)
# We make the Pipeline compatible with sklearn
model = compat.convert_river_to_sklearn(model)
# We compute the CV scores using the same CV scheme and the same scoring
scores = model_selection.cross_val_score(model, X, y, scoring=scorer, cv=cv)
# Display the average score and its standard deviation
print(f'ROC AUC: {scores.mean():.3f} (± {scores.std():.3f})')
###Output
ROC AUC: 0.964 (± 0.016)
|
Step 3/Activity_5_Assembling_a_Deep_Learning_System.ipynb | ###Markdown
Activity 5: Assembling a Deep Learning System In this activity, we will train the first version of our LSTM model using Bitcoin daily closing prices. The prices will be organized into weekly groups spanning 2016 and 2017, because we are interested in predicting a week's worth of trading prices.
###Code
%autosave 5
# Import necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
from keras.models import load_model
# Import training dataset
train = pd.read_csv('data/train_dataset.csv')
train.head()
###Output
_____no_output_____
###Markdown
Reshape Data
###Code
def create_groups(data, group_size=7):
"""Create distinct groups from a continuous series.
Parameters
----------
data: np.array
        Series of continuous observations.
group_size: int, default 7
Determines how large the groups are. That is,
how many observations each group contains.
Returns
-------
A Numpy array object.
"""
samples = []
for i in range(0, len(data), group_size):
sample = list(data[i:i + group_size])
if len(sample) == group_size:
samples.append(np.array(sample).reshape(1, group_size))
return np.array(samples)
# Find the remainder when the number of observations is divided by group size
len(train) % 7
# Create groups of 7 from our data.
# We drop the first two observations so that the
# number of total observations is divisible by the `group_size`.
data = create_groups(train['close_point_relative_normalization'][2:].values)
print(data.shape)
# Reshape data into format expected by LSTM layer
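# Keras LSTMs expect input of shape (samples, timesteps, features): here a single sample,
# 76 weeks as timesteps, and the 7 normalized daily prices of each week as its features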
X_train = data[:-1, :].reshape(1, 76, 7)
Y_validation = data[-1].reshape(1, 7)
print(X_train.shape)
print(Y_validation.shape)
###Output
(1, 76, 7)
(1, 7)
###Markdown
Load Our Model
###Code
# Load our previously trained model
model = load_model('bitcoin_lstm_v0.h5')
###Output
_____no_output_____
###Markdown
Train model
###Code
%%time
# Train the model
history = model.fit(
x=X_train, y=Y_validation,
batch_size=32, epochs=100)
# Plot loss function
pd.Series(history.history['loss']).plot(figsize=(14, 4));
###Output
_____no_output_____
###Markdown
Make Predictions
###Code
# Make predictions using X_train data
predictions = model.predict(x=X_train)[0]
predictions
def denormalize(series, last_value):
"""Denormalize the values for a given series.
This uses the last value available (i.e. the last
closing price of the week before our prediction)
as a reference for scaling the predicted results.
"""
result = last_value * (series + 1)
return result
# Denormalize predictions
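# The reference value is the closing price of the last day before the predicted week
# (assuming the rows of `train` are in chronological order), as described in the docstring above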
last_weeks_value = train[train['date'] == train['date'][:-7].max()]['close'].values[0]
denormalized_prediction = denormalize(predictions, last_weeks_value)
denormalized_prediction
# Plot denormalized predictions against the actual closing prices
plt.figure(figsize=(14, 4))
plt.plot(train['close'][-7:].values, label='Actual')
plt.plot(denormalized_prediction, color='#d35400', label='Predicted')
plt.grid()
plt.legend();
prediction_plot = np.zeros(len(train)-2)
prediction_plot[:] = np.nan
prediction_plot[-7:] = denormalized_prediction
plt.figure(figsize=(14, 4))
plt.plot(train['close'][-30:].values, label='Actual')
plt.plot(prediction_plot[-30:], color='#d35400', linestyle='--', label='Predicted')
plt.axvline(30 - 7, color='r', linestyle='--', linewidth=1)
plt.grid()
plt.legend(loc='lower right');
# TASK:
# Save model to disk
#
model.save('bitcoin_lstm_v0.h5')
###Output
_____no_output_____ |
.ipynb_checkpoints/angular_modulations-checkpoint.ipynb | ###Markdown
Creating a class for modulation
###Code
class modulation(object):
    # receives the signal and the frequency of the carrier and modulates it (FM and PM)
def __init__(self,frequency_carrier=200., t_max=1., kind='fm'):
self.frequency_carrier = frequency_carrier
self.t_max = t_max
self.kind = kind
        # Freq should be in MHz, but I don't have that kind of processing power, so it must be scaled down...
c = 10. #constant that makes things "continuous"
self.t = t = np.linspace(0, self.t_max, self.t_max * frequency_carrier * c)
self.mod = np.zeros(self.t.shape[0])
self.sig = np.zeros(self.t.shape[0])
if (self.kind not in["fm", "pm"]):
            raise NameError('%s is not implemented and probably will never be, deal with it!'%(self.kind))
def _mod_fm(self,kfm, amplitude):
print "#########################FM Modulation############################"
return amplitude*np.cos(self.frequency_carrier*2*np.pi*self.t
            + 2*np.pi*kfm*np.cumsum(self.sig))  # the clever trick: cumsum approximates the integral of the signal, which gives the FM phase
def _mod_pm(self, kpm, amplitude):
print "#########################PM Modulation############################"
return amplitude*np.cos(self.frequency_carrier*2*np.pi*self.t + kpm*self.sig)
def modulate(self, signal, k=1., amplitude=1., showing_options='both', periods_to_show=10):
self.sig = signal
if (self.kind=='fm'): self.mod = self._mod_fm(k, amplitude)
else: self.mod = self._mod_pm(k, amplitude)
if (showing_options in['time', 'both']):
plt.subplot(211)
plt.title("Signal/Modulated signal (%s)"%(self.kind.upper()))
plt.plot(self.t, self.sig)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.subplot(212)
plt.plot(self.t, self.mod)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.xlabel("Time(s)")
plt.ylabel("Amplitude(V)")
plt.show()
if (showing_options in ["frequency", "freq", "both"]):
fs = self.t.shape[0]/self.t_max #another way to retrieve the frequency
f = np.linspace(-fs/2.,fs/2.,self.mod.shape[0])
M = np.fft.fftshift(np.abs(np.fft.fft(self.mod)))
plt.title("Signal in frequency (%s)"%(self.kind.upper()))
plt.plot(f, M)
plt.xlabel("Frequency(Hz)")
plt.ylabel("Absolute value")
plt.grid()
plt.show()
def demodulate(self, frequency, periods_to_show=10):
print "#########################%s Demodulation############################"%(self.kind.upper())
s_diff = np.diff(np.hstack((self.mod[1], self.mod)))
s_diode = np.zeros(self.sig.shape[0])
for i in xrange(self.sig.shape[0]):
if (self.sig[i]>=0): s_diode[i] = self.sig[i]
        # ideal low-pass filter, very compactly written in two lines
        f_lp = int(round(1.5*frequency*self.t_max)) # (f/fs)*samples[], fs = samples[]/t, plus a 50% margin
s_filtered = np.fft.ifft(np.multiply(np.fft.fft(s_diode),
np.hstack((np.ones(f_lp),
np.zeros(s_diode.shape[0]-2*f_lp),
np.ones(f_lp))))).real
s_nodc = s_filtered - s_filtered.mean() # removing DC in the easiest way
plt.subplots_adjust(hspace=.7, wspace=.7)#adjusting spacing
plt.subplot(211)
plt.title("Original signal")
plt.plot(self.t, self.sig)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.subplot(212)
plt.title("Demodulated signal")
plt.plot(self.t, 2*s_nodc)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.xlabel("Time(s)")
plt.ylabel("Amplitude(V)")
plt.show()
print "#########################Showing steps############################"
plt.subplots_adjust(hspace=1., wspace=1.)#adjusting spacing
plt.subplot(411)
plt.title("Differentiating")
plt.plot(self.t, s_diff)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.subplot(412)
plt.title("'Diode'")
plt.plot(self.t, s_diode)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.subplot(413)
plt.title("Ideal Low pass (+50%)")
plt.plot(self.t, s_filtered)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.subplot(414)
plt.title("Removing DC level")
plt.plot(self.t, s_nodc)
plt.xlim(0, periods_to_show/float(self.frequency_carrier))
plt.show()
###Output
_____no_output_____
###Markdown
Testing the modulations FM
###Code
# creating things
# "Let there be light..." Just kidding
freq1 = 20.
fm = modulation(kind='fm')
signal = np.cos(freq1*2*np.pi*fm.t)
fm.modulate(signal, periods_to_show=20, k=.03, amplitude=2.)
###Output
#########################FM Modulation############################
###Markdown
PM
###Code
freq2 =40.
pm = modulation(kind='pm')
signal2 = np.sin(freq2*2*np.pi*pm.t)
pm.modulate(signal2, periods_to_show=10, k=0.5*np.pi, amplitude=7.)
###Output
#########################PM Modulation############################
###Markdown
Testing the demodulations FM
###Code
fm.demodulate(frequency=freq1, periods_to_show=20)
###Output
#########################FM Demodulation############################
###Markdown
PM
###Code
pm.demodulate(frequency=freq2, periods_to_show=10)
###Output
#########################PM Demodulation############################
###Markdown
Square wave FM Modulation
###Code
fq = 10.
sqfm = modulation(kind='fm')
sq = square(2 * np.pi * fq * sqfm.t)
sqfm.modulate(signal=sq, k=.0001, periods_to_show=40 )
###Output
#########################FM Modulation############################
###Markdown
FM Demodulation
###Code
sqfm.demodulate(frequency=fq+40, periods_to_show=40)
###Output
#########################FM Demodulation############################
###Markdown
PM Modulation
###Code
fq2 = 10.
sqpm = modulation(kind='pm')
sq2 = square(2 * np.pi * fq2 * sqpm.t)
sqpm.modulate(signal=sq2, k=5*np.pi, periods_to_show=40 )
###Output
#########################PM Modulation############################
###Markdown
PM Demodulation
###Code
sqpm.demodulate(frequency=fq2+100, periods_to_show=40)
###Output
#########################PM Demodulation############################
|
pytorch-tutorial-02-classification.ipynb | ###Markdown
Simple MNIST classifier
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision
import torchvision.transforms as transforms
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer1 = nn.Sequential(
nn.Linear(28*28, 300),
nn.Dropout(0.9),
nn.Tanh())
self.layer2 = nn.Sequential(
nn.Linear(300, 300),
nn.Tanh())
self.layer3 = nn.Sequential(
nn.Linear(300, 10)
)
def forward(self, x):
x = x.view(-1, 28*28)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
return x
net = Net()
x0 = torch.ones((28,28), requires_grad=True)
net(x0)
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 5, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(5),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(5, 5, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(5),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(245, num_classes)
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
return x
# load dataset
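# (0.1307, 0.3081) are the commonly used mean and standard deviation of the MNIST training pixels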
tr = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
training_data = torchvision.datasets.MNIST(root='./data', train=True,
download=True, transform=tr)
training_loader = torch.utils.data.DataLoader(training_data, batch_size=64,
shuffle=True, num_workers=1)
test_data = torchvision.datasets.MNIST(root='./data', train=False,
download=True, transform=tr)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=len(test_data),
shuffle=True, num_workers=1)
# the function parameters() is implemented in nn.Module
net = ConvNet()
params = list(net.parameters())
cross_entropy = nn.CrossEntropyLoss() # instantiate loss
opt = optim.Adam(params) # instantiate optimizer
epochs = 3
history = []
for i in range(0, epochs):
for j,(inputs, labels) in enumerate(training_loader):
# zero the parameter gradients
opt.zero_grad()
# regularization loss
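        # (L1 penalty: the sum of absolute values of every parameter, weighted by 0.00005 below)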
reg_loss = 0
for param in net.parameters():
reg_loss += torch.sum(torch.abs(param))
# forward pass
outputs = net(inputs)
# training loss
train_loss = cross_entropy(outputs, labels)
# calculate total loss
loss = train_loss + 0.00005*reg_loss
history.append(loss.item())
# backward pass
loss.backward()
opt.step()
if (j+1)%100==0:
print("epoch: {:2} batch: {:4} loss: {:3.4}".format(i+1,j+1,history[-1]))
# set model to evaluation mode
# (important for batchnorm/dropout)
net.train(False)
test_output, test_labels = [(net(data), target) for data, target in test_loader][0]
predicted_class = test_output.max(dim = 1)[1]
# compute accuracy
(predicted_class == test_labels).float().mean().item()
plt.plot(history);
# Save model to disk
torch.save(net.state_dict(), "net")
# Load model
net = ConvNet()
net.load_state_dict(torch.load("net"))
###Output
_____no_output_____ |
module3-autoencoders/Fixed_Lecture_NB_DS16_LS_DS_433_Autoencoders_Lecture.ipynb | ###Markdown
Lambda School Data Science*Unit 4, Sprint 3, Module 3*--- Autoencoders> An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner.[1][2] The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Learning Objectives *At the end of the lecture you should be able to*: * Part 1: Describe the components of an autoencoder * Part 2: Train an autoencoder * Part 3: Apply an autoencoder to a basic information retrieval problem __Problem:__ Is it possible to automatically represent an image as a fixed-sized vector even if it isn’t labeled? __Solution:__ Use an autoencoder. Why do we need to represent an image as a fixed-sized vector, you ask? * __Information Retrieval__ - [Reverse Image Search](https://en.wikipedia.org/wiki/Reverse_image_search) - [Recommendation Systems - Content Based Filtering](https://en.wikipedia.org/wiki/Recommender_systemContent-based_filtering) * __Dimensionality Reduction__ - [Feature Extraction](https://www.kaggle.com/c/vsb-power-line-fault-detection/discussion/78285) - [Manifold Learning](https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction) We've already seen *representation learning* when we talked about word embedding models during our NLP week. Today we're going to achieve a similar goal on images using *autoencoders*. An autoencoder is a neural network that is trained to attempt to copy its input to its output. Usually they are restricted in ways that allow them to copy only approximately. The model often learns useful properties of the data, because it is forced to prioritize which aspects of the input should be copied. The properties of autoencoders have made them an important part of modern generative modeling approaches. Consider autoencoders a special case of feed-forward networks (the kind we've been studying); backpropagation and gradient descent still work. Autoencoder Architecture (Learn) Overview: The *encoder* compresses the input data and the *decoder* does the reverse to produce an uncompressed version of the data, creating a reconstruction of the input that is as accurate as possible. The learning process is described simply as minimizing a loss function: $ L(x, g(f(x))) $ - $L$ is a loss function penalizing $g(f(x))$ for being dissimilar from $x$ (such as mean squared error) - $f$ is the encoder function - $g$ is the decoder function  Follow Along Extremely Simple Autoencoder
###Code
import tensorflow as tf
import numpy as np
import os
%load_ext tensorboard
# needed to update link
# use this link, here -- it works!
URL_ = "https://github.com/LambdaSchool/DS-Unit-4-Sprint-2-Neural-Networks/blob/main/quickdraw10.npz?raw=true"
# download the quickdraw10 dataset that we will be using to train our autoencoders
path_to_zip = tf.keras.utils.get_file('./quickdraw10.npz', origin=URL_, extract=False)
data = np.load(path_to_zip)
x_train = data['arr_0']
y_train = data['arr_1']
print(x_train.shape)
print(y_train.shape)
# data is loaded in already as 1D row vectors
x_train[0].shape
class_names = ['apple',
'anvil',
'airplane',
'banana',
'The Eiffel Tower',
'The Mona Lisa',
'The Great Wall of China',
'alarm clock',
'ant',
'asparagus']
import matplotlib.pyplot as plt
plt.figure(figsize=(10,5))
start = 0
# helper function used to plot images
for num, name in enumerate(class_names):
plt.subplot(2,5, num+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train[start].reshape(28,28), cmap=plt.cm.binary)
plt.xlabel(name)
start += 10000
plt.show()
###Output
_____no_output_____
###Markdown
Prep data
###Code
from sklearn.utils import shuffle
# Shuffle
# it's also a good idea to shuffle data before using it to build a model
x_train, y_train = shuffle(x_train, y_train)
# Normalize
# we are scaling the pixel values between 0 and 1 by dividing by the largest pixel value (i.e. 255)
max_pixel_value = x_train.max()
x_train = x_train.astype('float32') / max_pixel_value
print(x_train.shape)
# Check that our pixel values are indeed normalized
assert x_train.min() == 0.0
assert x_train.max() == 1.0
# YOUR CODE HERE
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
# build simple auto-encoder
# save input data dimensions to variable
input_dims = x_train.shape[1]
shape = (input_dims,)
# decoder dimensions (i.e. 784 dimensions)
decoding_dim = input_dims
# encoder output dimensions
latent_vect_dims = 32
# create input layer
inputs = Input(shape=shape)
# create encoder layer
# We can think of each layer as an individual function y = f(x)
# We don't think of f(x) as f times x, so don't think of this as Dense times x
# y = f(x) <=> layer_output = Dense(parameters)(layer_input)
# Dense layer is a mathematical function with inputs that are passed into it so
# that it may give outputs.
# We can even conceive of neural networks as composite functions
# With a Model class we have to give an extra 'layer of clarity' considering the
# flexibility of the NN architecture we can build in contrast to the simple,
# yet limited architectonics of the Sequential Class
# What comes from the inputs layer will go into the encoder layer
encoded = Dense(latent_vect_dims, activation="relu")(inputs)
# create decoder layer
# compressed vector we pass in has to be decoded/uncompressed. It does this to reconstruct the original image
decoded = Dense(decoding_dim, activation="sigmoid")(encoded)
# bring it all together using the model API. inputs is the original image, and outputs is our reconstructed image with some layers inbetween them
# Now we're using the Model class. Allows more flexibility for how we build our models.
#
autoencoder_simple = Model(inputs=inputs, outputs=decoded, name="simple_autoencoder")
# compiling our Model class model
autoencoder_simple.compile(optimizer='nadam', loss='binary_crossentropy')
autoencoder_simple.summary()
# Here we can see that the dimensions of our image were 784, were compressed to 32,
# before being reconstructed with its original 784 dimensions by the final/output layer
import os
import datetime
from tensorflow.keras.callbacks import TensorBoard
# tf.keras.callbacks.TesnorBoard()
# cut off training if the loss doesn't decrease by a certain amount over X number of epochs
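# min_delta is the smallest change that counts as an improvement; patience is how many
# epochs without improvement to wait before stopping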
stop = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=2)
now = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = os.path.join("logs", f"SimpleAutoencoder-{now}")
tensorboard = TensorBoard(log_dir=logdir)
autoencoder_simple.fit(x_train, # input image to encoder
x_train, # provide input image to decoder so the model learns how to reconstruct the input image
epochs=100,
batch_size=64,
shuffle=True,
validation_split=.2,
verbose = True,
callbacks=[stop, tensorboard])
###Output
Epoch 1/100
1250/1250 [==============================] - 6s 3ms/step - loss: 0.2935 - val_loss: 0.2430
Epoch 2/100
1250/1250 [==============================] - 3s 2ms/step - loss: 0.2333 - val_loss: 0.2282
Epoch 3/100
1250/1250 [==============================] - 3s 2ms/step - loss: 0.2255 - val_loss: 0.2245
Epoch 4/100
1250/1250 [==============================] - 3s 3ms/step - loss: 0.2234 - val_loss: 0.2236
Epoch 5/100
1250/1250 [==============================] - 3s 2ms/step - loss: 0.2227 - val_loss: 0.2229
Epoch 6/100
1250/1250 [==============================] - 3s 2ms/step - loss: 0.2223 - val_loss: 0.2225
Epoch 7/100
1250/1250 [==============================] - 3s 3ms/step - loss: 0.2221 - val_loss: 0.2223
###Markdown
Use Trained Model to Reconstruct Images
###Code
# encode and decode some images
# original images go in (i.e, x_train) and decoded images come out (i.e. a non-perfect reconstruction of x_train)
decoded_imgs = autoencoder_simple(x_train)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
### helper function for plotting reconstructed and original images
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_train[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].numpy().reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Challenge: Expected to talk about the components of an autoencoder and their purpose. Train an Autoencoder (Learn) Overview: As long as our architecture maintains an hourglass shape, we can continue to add layers and create a deeper network. Follow Along Deep Autoencoder
###Code
# encoder -> decoder
# dim of each hidden layer: 784, 128, 64, 32 -> 64, 128, 784
# YOUR CODE HERE
# input layer
inputs = Input(shape=(784,))
# 1st encoding layer. Inputs to the first Dense layer are the original input dimensions
# Compresses input image into 128 dimension vector
encoded_1 = Dense(128, activation="relu")(inputs)
# 2nd encoding layer
# Compresses 128 dim vect into 64 dim vect
encoded_2 = Dense(64, activation="relu")(encoded_1)
# 3rd encoding layer
# Compresses 64 dim vect into 32 dim vect
# This is the final compression that the encoder performs
encoded_3 = Dense(32, activation="relu")(encoded_2)
## All following layers belong to the decoder ##
# 1st decoding layer
# Decompresses 32 dim vector into 64 dim vect
decoding_1 = Dense(64, activation="relu")(encoded_3)
# 2nd decoding layer
# Decompresses 64 dim vect into 128 dim vect
decoding_2 = Dense(128, activation="relu")(decoding_1)
# 3rd decoding layer
# Decompresses 128 dim vect into 784 dim vect
# Seeing as we're using binary crossentropy later for our compiling,
# we're using sigmoid for our output layers activation
decoding_3 = Dense(784, activation="sigmoid")(decoding_2)
# bring it all together using the Model API
autoencoder_deep = Model(inputs=inputs, outputs=decoding_3, name="autoencoder_deep")
autoencoder_deep.summary()
# compile & fit model
autoencoder_deep.compile(optimizer='nadam', loss='binary_crossentropy')
from tensorflow.keras.callbacks import TensorBoard
# tf.keras.callbacks.TesnorBoard()
stop = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=5)
logdir = os.path.join("logs", f"DeepAutoencoder")
tensorboard = TensorBoard(log_dir=logdir)
autoencoder_deep.fit(x_train,
x_train,
epochs=100,
batch_size=64,
shuffle=True,
validation_split=.2,
verbose = True,
callbacks=[stop, tensorboard],
workers=10)
###Output
Epoch 1/100
1250/1250 [==============================] - 6s 4ms/step - loss: 0.2868 - val_loss: 0.2509
Epoch 2/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.2424 - val_loss: 0.2346
Epoch 3/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.2294 - val_loss: 0.2255
Epoch 4/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.2217 - val_loss: 0.2183
Epoch 5/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.2164 - val_loss: 0.2144
Epoch 6/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.2125 - val_loss: 0.2109
Epoch 7/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.2092 - val_loss: 0.2087
Epoch 8/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.2067 - val_loss: 0.2064
Epoch 9/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.2048 - val_loss: 0.2047
Epoch 10/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.2032 - val_loss: 0.2044
Epoch 11/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.2019 - val_loss: 0.2019
Epoch 12/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.2006 - val_loss: 0.2009
Epoch 13/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1992 - val_loss: 0.1996
Epoch 14/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1982 - val_loss: 0.1985
Epoch 15/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1974 - val_loss: 0.1984
Epoch 16/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1967 - val_loss: 0.1974
Epoch 17/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1961 - val_loss: 0.1969
Epoch 18/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1957 - val_loss: 0.1963
Epoch 19/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1952 - val_loss: 0.1957
Epoch 20/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1948 - val_loss: 0.1955
Epoch 21/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1944 - val_loss: 0.1951
Epoch 22/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1941 - val_loss: 0.1947
Epoch 23/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1938 - val_loss: 0.1945
Epoch 24/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1935 - val_loss: 0.1944
Epoch 25/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1932 - val_loss: 0.1941
Epoch 26/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1930 - val_loss: 0.1934
Epoch 27/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1927 - val_loss: 0.1941
Epoch 28/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1924 - val_loss: 0.1931
Epoch 29/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1922 - val_loss: 0.1929
Epoch 30/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1920 - val_loss: 0.1926
Epoch 31/100
1250/1250 [==============================] - 5s 4ms/step - loss: 0.1918 - val_loss: 0.1930
Epoch 32/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1916 - val_loss: 0.1926
Epoch 33/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1914 - val_loss: 0.1925
Epoch 34/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1912 - val_loss: 0.1923
Epoch 35/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1911 - val_loss: 0.1928
Epoch 36/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1909 - val_loss: 0.1920
Epoch 37/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1907 - val_loss: 0.1916
Epoch 38/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1906 - val_loss: 0.1917
Epoch 39/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1905 - val_loss: 0.1913
Epoch 40/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1903 - val_loss: 0.1919
Epoch 41/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1902 - val_loss: 0.1915
Epoch 42/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1901 - val_loss: 0.1912
Epoch 43/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1900 - val_loss: 0.1911
Epoch 44/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1899 - val_loss: 0.1910
Epoch 45/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1897 - val_loss: 0.1909
Epoch 46/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1896 - val_loss: 0.1906
Epoch 47/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1895 - val_loss: 0.1905
Epoch 48/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1895 - val_loss: 0.1906
Epoch 49/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1894 - val_loss: 0.1902
Epoch 50/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1893 - val_loss: 0.1906
Epoch 51/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1891 - val_loss: 0.1904
Epoch 52/100
1250/1250 [==============================] - 4s 3ms/step - loss: 0.1891 - val_loss: 0.1904
Epoch 53/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1890 - val_loss: 0.1903
Epoch 54/100
1250/1250 [==============================] - 4s 4ms/step - loss: 0.1889 - val_loss: 0.1902
###Markdown
Use trained model to reconstruct images
###Code
decoded_imgs = autoencoder_deep.predict(x_train)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_train[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Convolutional autoencoder> Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders. In practical settings, autoencoders applied to images are always convolutional autoencoders -- they simply perform much better.> Let's implement one. The encoder will consist of a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist of a stack of Conv2D and UpSampling2D layers.
###Code
# we need to transform our row vectors back into matrices
# because the convolutional and pooling layers expect images in the form of matrices
x_train = x_train.reshape((x_train.shape[0], 28, 28))
x_train[0].shape
###Output
_____no_output_____
###Markdown
Example Image of a Conv Autoencoder 
###Code
4 * 4* 8
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Reshape
from tensorflow.keras.models import Model
# YOUR CODE HERE
# The model architecture we're building here is meant to mimic the image above
# define some paramters || dims of each, individual sample
input_shape = (28,28, 1)
# weight matrix parameters
weight_matrix_size = (3,3)
pooling_size = (2,2)
# input layer
inputs=Input(shape=input_shape)
# Each of these `encoded` assignments takes the previous layer's output and passes
# it into the next layer as input until we reach the end of the encoder
# encoding layers. 16 weight matrices which will output 16 map matrices
encoded = Conv2D(16, weight_matrix_size, activation="relu", padding="same")(inputs)
encoded = MaxPooling2D(pooling_size, padding="same")(encoded)
encoded = Conv2D(8, weight_matrix_size, activation="relu", padding="same")(encoded)
encoded = MaxPooling2D(pooling_size, padding="same")(encoded)
# padding="same" means there are the same amount of values on both sides
encoded = Conv2D(8, weight_matrix_size, activation="relu", padding="same")(encoded)
# After the next pooling layer the data will have shape (4, 4, 8):
# eight 4x4 feature maps stacked on top of each other (a rank-3 tensor per sample, rank 4 with the batch dimension)
encoded = MaxPooling2D(pooling_size, padding="same")(encoded)
# flatten 3D tensor into 1D vector in preperation for the Dense layer
encoded_vect = Flatten()(encoded)
# 128 = 4*4*8
encoded_vect = Dense(128, activation="relu")(encoded_vect)
# Reshape the 1D vector back into a (4, 4, 8) volume:
# Conv2D layers expect spatially-shaped (height, width, channels) inputs,
# which is why we are juggling these shapes back and forth
encoded = Reshape((4,4,8))(encoded_vect)
### decoding layers ###
decoded = Conv2D(8, weight_matrix_size, activation="relu", padding="same")(encoded)
decoded = UpSampling2D(pooling_size)(decoded)
decoded = Conv2D(8, weight_matrix_size, activation="relu", padding="same")(decoded)
decoded = UpSampling2D(pooling_size)(decoded)
decoded = Conv2D(16, weight_matrix_size, activation="relu")(decoded)
decoded = UpSampling2D(pooling_size)(decoded)
# because this is the final reconstruction of the original image
# we must necessarily use a single weight matrix for the convolution
# so that the final output is a 2D matrix and not a rank 3 Tensor (i.e. a volume)
decoded = Conv2D(1, weight_matrix_size, activation="sigmoid", padding="same")(decoded)
# bring it all together using the Mode API
conv_autoencoder = Model(inputs=inputs, outputs=decoded, name="conv_autoencoder")
conv_autoencoder.summary()
# compile & fit model
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
import os
import datetime
conv_autoencoder.compile(optimizer='nadam', loss='binary_crossentropy')
from tensorflow.keras.callbacks import TensorBoard
# tf.keras.callbacks.TesnorBoard()
stop = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=5)
logdir = os.path.join("logs", f"ConvolutionalAutoencoder")
tensorboard = TensorBoard(log_dir=logdir)
conv_autoencoder.fit(x_train,
x_train,
epochs=50,
batch_size=32,
shuffle=True,
validation_split=.2,
verbose = True,
callbacks=[stop, tensorboard],
workers=10)
import matplotlib.pyplot as plt
decoded_imgs = conv_autoencoder.predict(x_train)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_train[i])
plt.title(class_names[y_train[i]])
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n+1)
plt.imshow(decoded_imgs[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Visualization of the Representations
###Code
# we have isolated the encoder portion of our auto-encoder so that we can access the encoder vector (i.e. the output of the encoder)
encoder = Model(inputs=inputs, outputs=encoded)
# the predictions (i.e. the output) of our encoder model are the original images encoded into a smaller-dimensional space (i.e. the encoded vectors)
encoded_imgs = encoder.predict(x_train)
n = 10
plt.figure(figsize=(20, 8))
for i in range(1, n):
ax = plt.subplot(1, n, i)
plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# these images are the encoded vectors for some of the images in the x_train
# notice that we really can't interpret them; this is the price we pay for non-linear dimensionality reduction
# the features in the encoded vectors are non-linear combinations of the input features
# this is the same give and take that we make with PCA - which is linear dimensionality reduction
# here's the link for the cool interactive visual for PCA that I used in class: https://setosa.io/ev/principal-component-analysis/
# What we gain from the reduced dimensions, we lose in interpretability
# We use autoencoders for dimensionality reduction analogous to PCA
###Output
_____no_output_____
###Markdown
Challenge: You will train an autoencoder at some point in the near future. Information Retrieval with Autoencoders (Learn) Overview: A common use case for autoencoders is reverse image search. Let's try to draw an image and see what's most similar in our dataset. To accomplish this we will need to slice our autoencoder in half to extract our reduced features. :) Follow Along: We are going to perform the following: - Build an encoder model - Train a NearestNeighbors model on encoded images - Choose a query image - Find similar encoded images using the trained NearestNeighbors model - Check our results, making sure that the similar images are in fact similar. Build an encoder model: Use the `Model` class and the encoder layers to build an encoder model. Remember that we first need to train a full autoencoder model, which has an encoder and a decoder, before we can "break off" the trained encoder portion.
###Code
encoded.shape
# we have isolated the encoder portion of our auto-encoder so that we can access the encoder vector
# (i.e. the output of the encoder)
# YOUR CODE HERE
encoded_flat = Flatten()(encoded)
# inputs are the original images; outputs are the flattened encoded representations
encoder = Model(inputs=inputs, outputs=encoded_flat)
# the encoder maps each image to a 1D encoded vector
encoded_imgs = encoder.predict(x_train)
# now we can pass these row vectors into a NearestNeighbors model
encoded_imgs.shape
###Output
_____no_output_____
###Markdown
Build a NearestNeighbors modelWe need to train a NearestNeighbors model on the encoded images.
###Code
from sklearn.neighbors import NearestNeighbors
# fit KNN on encoded images (i.e. the encoded vectors)
nn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree')
# pass in the encoded images (i.e. the encoded vectors )
nn.fit(encoded_imgs)
###Output
_____no_output_____
###Markdown
Select a query imageWe need to choose an image that we will pass into NearestNeighbors in order to find similar images.
###Code
# get a query image
query = 27
# this is the image that we want to pass into NearestNeighbors in order to find similar images
# this will be done by looking at the distance between the encoder vectors of the images
plt.title(class_names[y_train[query]])
plt.imshow(x_train[query]);
###Output
_____no_output_____
###Markdown
Find Similar Images - Use the encoder to encode our query image- Use NearestNeighbors to find similar images- Check our results
###Code
query_img = x_train[query]
query_img.shape
# YOUR CODE HERE
query_img_reshaped = np.expand_dims(query_img, 0)
query_img_reshaped.shape
# encode query image using encoder model
query_img_encoded = encoder.predict(query_img_reshaped)
query_img_encoded.shape
neigh_dist, neigh_ind = nn.kneighbors(query_img_encoded)
neigh_dist.round(3)[0][1:]
# Rounding to the third decimal place makes this array easier to read
nearest_neighbor_index = neigh_ind[0][1:][3]
# the last index selects which of the 9 nearest neighbors to display (valid range 0-8)
plt.imshow(x_train[nearest_neighbor_index]);
###Output
_____no_output_____ |
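###Markdown
To sanity-check more than one match at a time, the remaining neighbor indices can be plotted side by side - a minimal sketch, reusing `neigh_ind`, `x_train`, `y_train`, and `class_names` from the cells above.
###Code
import matplotlib.pyplot as plt
neighbor_ids = neigh_ind[0][1:] # skip index 0, which is the query image itself
plt.figure(figsize=(2 * len(neighbor_ids), 2))
for j, idx in enumerate(neighbor_ids):
    ax = plt.subplot(1, len(neighbor_ids), j + 1)
    plt.imshow(x_train[idx])
    plt.title(class_names[y_train[idx]])
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____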
DataScience_Project1_Predict_products_sales_in_Walmart/Final_Version/station2.ipynb | ###Markdown
OLS on the full dataset
###Code
target1 = station['units']
target2 = station['log1p_units']
station.drop(columns=['units','log1p_units'],inplace=True)
station.tail()
len(station)
df1 = pd.concat([station,target1], axis=1)
df2 = pd.concat([station,target2], axis=1)
df2.to_csv("station2.csv", sep=",", index=False)
###Output
_____no_output_____
###Markdown
1. OLS : df1 (units)
###Code
model1 = sm.OLS.from_formula('units ~ tmax + tmin + tavg + dewpoint + wetbulb + heat + cool + preciptotal + stnpressure + sealevel \
+ resultspeed + C(resultdir) + avgspeed + sunset + sunrise + daytime + C(year) + C(month) + relative_humility \
+ windchill + weekend + C(rainY) + C(item_nbr)+ 0', data = df1)
result1 = model1.fit()
print(result1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: units R-squared: 0.915
Model: OLS Adj. R-squared: 0.915
Method: Least Squares F-statistic: 5801.
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:47:20 Log-Likelihood: -2.7907e+05
No. Observations: 94572 AIC: 5.585e+05
Df Residuals: 94396 BIC: 5.601e+05
Df Model: 175
Covariance Type: nonrobust
======================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------
C(resultdir)[1.0] -4.4768 5.999 -0.746 0.455 -16.234 7.281
C(resultdir)[2.0] -4.3837 5.991 -0.732 0.464 -16.127 7.359
C(resultdir)[3.0] -4.3659 6.003 -0.727 0.467 -16.132 7.400
C(resultdir)[4.0] -4.2497 5.997 -0.709 0.479 -16.004 7.505
C(resultdir)[5.0] -4.3251 6.001 -0.721 0.471 -16.087 7.436
C(resultdir)[6.0] -4.4028 6.009 -0.733 0.464 -16.180 7.374
C(resultdir)[7.0] -4.4898 5.999 -0.748 0.454 -16.248 7.268
C(resultdir)[8.0] -4.0611 6.000 -0.677 0.499 -15.822 7.700
C(resultdir)[9.0] -4.2762 6.003 -0.712 0.476 -16.042 7.490
C(resultdir)[10.0] -4.6659 6.001 -0.777 0.437 -16.429 7.097
C(resultdir)[11.0] -4.5637 6.007 -0.760 0.447 -16.338 7.210
C(resultdir)[12.0] -4.4039 6.012 -0.732 0.464 -16.188 7.380
C(resultdir)[13.0] -4.3396 5.995 -0.724 0.469 -16.091 7.411
C(resultdir)[14.0] -4.8741 6.001 -0.812 0.417 -16.635 6.887
C(resultdir)[15.0] -4.4767 6.001 -0.746 0.456 -16.239 7.286
C(resultdir)[16.0] -4.3924 6.007 -0.731 0.465 -16.166 7.381
C(resultdir)[17.0] -4.1661 6.010 -0.693 0.488 -15.945 7.613
C(resultdir)[18.0] -4.4224 6.004 -0.737 0.461 -16.191 7.346
C(resultdir)[19.0] -4.5436 6.002 -0.757 0.449 -16.308 7.220
C(resultdir)[20.0] -4.2053 5.998 -0.701 0.483 -15.961 7.551
C(resultdir)[21.0] -4.5655 6.000 -0.761 0.447 -16.325 7.194
C(resultdir)[22.0] -4.3486 5.998 -0.725 0.468 -16.104 7.407
C(resultdir)[23.0] -4.4075 6.001 -0.735 0.463 -16.168 7.354
C(resultdir)[24.0] -4.4556 5.999 -0.743 0.458 -16.214 7.302
C(resultdir)[25.0] -4.3894 5.999 -0.732 0.464 -16.147 7.368
C(resultdir)[26.0] -4.4055 5.994 -0.735 0.462 -16.154 7.343
C(resultdir)[27.0] -4.4325 5.990 -0.740 0.459 -16.173 7.308
C(resultdir)[28.0] -4.3815 5.989 -0.732 0.464 -16.121 7.358
C(resultdir)[29.0] -4.4260 5.992 -0.739 0.460 -16.169 7.317
C(resultdir)[30.0] -4.4508 5.992 -0.743 0.458 -16.196 7.294
C(resultdir)[31.0] -4.4133 5.993 -0.736 0.461 -16.159 7.332
C(resultdir)[32.0] -4.4561 5.987 -0.744 0.457 -16.191 7.279
C(resultdir)[33.0] -4.4693 5.993 -0.746 0.456 -16.216 7.277
C(resultdir)[34.0] -4.4151 5.994 -0.737 0.461 -16.164 7.334
C(resultdir)[35.0] -4.2673 5.991 -0.712 0.476 -16.009 7.475
C(resultdir)[36.0] -4.4579 6.000 -0.743 0.458 -16.218 7.302
C(year)[T.2013] -0.2052 0.037 -5.578 0.000 -0.277 -0.133
C(year)[T.2014] -0.3567 0.044 -8.041 0.000 -0.444 -0.270
C(month)[T.2] 0.1404 0.098 1.427 0.153 -0.052 0.333
C(month)[T.3] 0.1560 0.140 1.118 0.263 -0.117 0.429
C(month)[T.4] 0.3166 0.220 1.438 0.150 -0.115 0.748
C(month)[T.5] 0.4347 0.281 1.545 0.122 -0.117 0.986
C(month)[T.6] 0.5271 0.301 1.749 0.080 -0.064 1.118
C(month)[T.7] 0.3737 0.282 1.325 0.185 -0.179 0.926
C(month)[T.8] 0.3857 0.237 1.624 0.104 -0.080 0.851
C(month)[T.9] 0.1574 0.206 0.763 0.446 -0.247 0.562
C(month)[T.10] -0.0012 0.209 -0.006 0.995 -0.411 0.409
C(month)[T.11] -0.0869 0.200 -0.435 0.664 -0.479 0.305
C(month)[T.12] -0.0307 0.134 -0.229 0.819 -0.294 0.232
C(rainY)[T.1] 0.0229 0.042 0.550 0.583 -0.059 0.105
C(item_nbr)[T.2] -4.747e-15 0.224 -2.12e-14 1.000 -0.440 0.440
C(item_nbr)[T.3] 1.804e-14 0.224 8.04e-14 1.000 -0.440 0.440
C(item_nbr)[T.4] 1.16e-13 0.224 5.17e-13 1.000 -0.440 0.440
C(item_nbr)[T.5] 9.908e-14 0.224 4.42e-13 1.000 -0.440 0.440
C(item_nbr)[T.6] -5.864e-14 0.224 -2.61e-13 1.000 -0.440 0.440
C(item_nbr)[T.7] 4.677e-14 0.224 2.08e-13 1.000 -0.440 0.440
C(item_nbr)[T.8] 5.132e-14 0.224 2.29e-13 1.000 -0.440 0.440
C(item_nbr)[T.9] -2.049e-14 0.224 -9.13e-14 1.000 -0.440 0.440
C(item_nbr)[T.10] 3.224e-14 0.224 1.44e-13 1.000 -0.440 0.440
C(item_nbr)[T.11] -1.406e-13 0.224 -6.27e-13 1.000 -0.440 0.440
C(item_nbr)[T.12] -3.333e-14 0.224 -1.49e-13 1.000 -0.440 0.440
C(item_nbr)[T.13] 3.475e-14 0.224 1.55e-13 1.000 -0.440 0.440
C(item_nbr)[T.14] 1.817e-13 0.224 8.1e-13 1.000 -0.440 0.440
C(item_nbr)[T.15] -2.081e-14 0.224 -9.27e-14 1.000 -0.440 0.440
C(item_nbr)[T.16] 32.8873 0.224 146.566 0.000 32.448 33.327
C(item_nbr)[T.17] 1.232e-13 0.224 5.49e-13 1.000 -0.440 0.440
C(item_nbr)[T.18] -1.931e-14 0.224 -8.61e-14 1.000 -0.440 0.440
C(item_nbr)[T.19] -3.912e-14 0.224 -1.74e-13 1.000 -0.440 0.440
C(item_nbr)[T.20] 2.489e-14 0.224 1.11e-13 1.000 -0.440 0.440
C(item_nbr)[T.21] 2.075e-15 0.224 9.25e-15 1.000 -0.440 0.440
C(item_nbr)[T.22] 3.785e-14 0.224 1.69e-13 1.000 -0.440 0.440
C(item_nbr)[T.23] 1.095e-13 0.224 4.88e-13 1.000 -0.440 0.440
C(item_nbr)[T.24] 1.626e-14 0.224 7.25e-14 1.000 -0.440 0.440
C(item_nbr)[T.25] 157.4754 0.224 701.808 0.000 157.036 157.915
C(item_nbr)[T.26] -3.193e-14 0.224 -1.42e-13 1.000 -0.440 0.440
C(item_nbr)[T.27] 8.115e-14 0.224 3.62e-13 1.000 -0.440 0.440
C(item_nbr)[T.28] 4.112e-14 0.224 1.83e-13 1.000 -0.440 0.440
C(item_nbr)[T.29] -1.446e-14 0.224 -6.45e-14 1.000 -0.440 0.440
C(item_nbr)[T.30] -3.916e-14 0.224 -1.75e-13 1.000 -0.440 0.440
C(item_nbr)[T.31] 3.358e-14 0.224 1.5e-13 1.000 -0.440 0.440
C(item_nbr)[T.32] 3.291e-14 0.224 1.47e-13 1.000 -0.440 0.440
C(item_nbr)[T.33] -3.071e-14 0.224 -1.37e-13 1.000 -0.440 0.440
C(item_nbr)[T.34] 1.336e-14 0.224 5.95e-14 1.000 -0.440 0.440
C(item_nbr)[T.35] -3.241e-14 0.224 -1.44e-13 1.000 -0.440 0.440
C(item_nbr)[T.36] -1.018e-13 0.224 -4.53e-13 1.000 -0.440 0.440
C(item_nbr)[T.37] -2.954e-15 0.224 -1.32e-14 1.000 -0.440 0.440
C(item_nbr)[T.38] 4.543e-14 0.224 2.02e-13 1.000 -0.440 0.440
C(item_nbr)[T.39] 0.1655 0.224 0.738 0.461 -0.274 0.605
C(item_nbr)[T.40] 5.264e-14 0.224 2.35e-13 1.000 -0.440 0.440
C(item_nbr)[T.41] 7.82e-15 0.224 3.49e-14 1.000 -0.440 0.440
C(item_nbr)[T.42] -1.966e-14 0.224 -8.76e-14 1.000 -0.440 0.440
C(item_nbr)[T.43] -8.832e-15 0.224 -3.94e-14 1.000 -0.440 0.440
C(item_nbr)[T.44] -3.533e-14 0.224 -1.57e-13 1.000 -0.440 0.440
C(item_nbr)[T.45] -2.063e-14 0.224 -9.19e-14 1.000 -0.440 0.440
C(item_nbr)[T.46] -1.289e-14 0.224 -5.75e-14 1.000 -0.440 0.440
C(item_nbr)[T.47] -6.164e-15 0.224 -2.75e-14 1.000 -0.440 0.440
C(item_nbr)[T.48] 8.91e-15 0.224 3.97e-14 1.000 -0.440 0.440
C(item_nbr)[T.49] 5.961e-15 0.224 2.66e-14 1.000 -0.440 0.440
C(item_nbr)[T.50] 0.3580 0.224 1.595 0.111 -0.082 0.798
C(item_nbr)[T.51] -4.244e-15 0.224 -1.89e-14 1.000 -0.440 0.440
C(item_nbr)[T.52] -3.931e-14 0.224 -1.75e-13 1.000 -0.440 0.440
C(item_nbr)[T.53] -6.737e-14 0.224 -3e-13 1.000 -0.440 0.440
C(item_nbr)[T.54] -8.456e-14 0.224 -3.77e-13 1.000 -0.440 0.440
C(item_nbr)[T.55] -2.693e-14 0.224 -1.2e-13 1.000 -0.440 0.440
C(item_nbr)[T.56] 1.124e-14 0.224 5.01e-14 1.000 -0.440 0.440
C(item_nbr)[T.57] -9.361e-15 0.224 -4.17e-14 1.000 -0.440 0.440
C(item_nbr)[T.58] 3.964e-15 0.224 1.77e-14 1.000 -0.440 0.440
C(item_nbr)[T.59] 2.67e-15 0.224 1.19e-14 1.000 -0.440 0.440
C(item_nbr)[T.60] -7.876e-15 0.224 -3.51e-14 1.000 -0.440 0.440
C(item_nbr)[T.61] -3.535e-15 0.224 -1.58e-14 1.000 -0.440 0.440
C(item_nbr)[T.62] 2.075e-14 0.224 9.25e-14 1.000 -0.440 0.440
C(item_nbr)[T.63] -9.557e-15 0.224 -4.26e-14 1.000 -0.440 0.440
C(item_nbr)[T.64] 0.7676 0.224 3.421 0.001 0.328 1.207
C(item_nbr)[T.65] 3.551e-16 0.224 1.58e-15 1.000 -0.440 0.440
C(item_nbr)[T.66] -2.002e-15 0.224 -8.92e-15 1.000 -0.440 0.440
C(item_nbr)[T.67] 7.073e-15 0.224 3.15e-14 1.000 -0.440 0.440
C(item_nbr)[T.68] 1.863e-15 0.224 8.3e-15 1.000 -0.440 0.440
C(item_nbr)[T.69] -1.089e-14 0.224 -4.85e-14 1.000 -0.440 0.440
C(item_nbr)[T.70] 1.269e-14 0.224 5.65e-14 1.000 -0.440 0.440
C(item_nbr)[T.71] 3.606e-15 0.224 1.61e-14 1.000 -0.440 0.440
C(item_nbr)[T.72] 2.683e-14 0.224 1.2e-13 1.000 -0.440 0.440
C(item_nbr)[T.73] 6.72e-15 0.224 3e-14 1.000 -0.440 0.440
C(item_nbr)[T.74] -2.687e-14 0.224 -1.2e-13 1.000 -0.440 0.440
C(item_nbr)[T.75] -3.167e-16 0.224 -1.41e-15 1.000 -0.440 0.440
C(item_nbr)[T.76] 8.797e-15 0.224 3.92e-14 1.000 -0.440 0.440
C(item_nbr)[T.77] 1.0340 0.224 4.608 0.000 0.594 1.474
C(item_nbr)[T.78] 3.666e-13 0.224 1.63e-12 1.000 -0.440 0.440
C(item_nbr)[T.79] -1.414e-14 0.224 -6.3e-14 1.000 -0.440 0.440
C(item_nbr)[T.80] -2.144e-15 0.224 -9.56e-15 1.000 -0.440 0.440
C(item_nbr)[T.81] -1.18e-15 0.224 -5.26e-15 1.000 -0.440 0.440
C(item_nbr)[T.82] 7.923e-15 0.224 3.53e-14 1.000 -0.440 0.440
C(item_nbr)[T.83] 7.955e-15 0.224 3.55e-14 1.000 -0.440 0.440
C(item_nbr)[T.84] 9.825e-15 0.224 4.38e-14 1.000 -0.440 0.440
C(item_nbr)[T.85] 0.0786 0.224 0.350 0.726 -0.361 0.518
C(item_nbr)[T.86] -3.204e-15 0.224 -1.43e-14 1.000 -0.440 0.440
C(item_nbr)[T.87] 6.288e-15 0.224 2.8e-14 1.000 -0.440 0.440
C(item_nbr)[T.88] -1.777e-14 0.224 -7.92e-14 1.000 -0.440 0.440
C(item_nbr)[T.89] 6.738e-15 0.224 3e-14 1.000 -0.440 0.440
C(item_nbr)[T.90] 5.494e-15 0.224 2.45e-14 1.000 -0.440 0.440
C(item_nbr)[T.91] 7.44e-15 0.224 3.32e-14 1.000 -0.440 0.440
C(item_nbr)[T.92] 1.194e-14 0.224 5.32e-14 1.000 -0.440 0.440
C(item_nbr)[T.93] 0.6819 0.224 3.039 0.002 0.242 1.122
C(item_nbr)[T.94] -8.296e-15 0.224 -3.7e-14 1.000 -0.440 0.440
C(item_nbr)[T.95] 9.141e-14 0.224 4.07e-13 1.000 -0.440 0.440
C(item_nbr)[T.96] 5.786e-15 0.224 2.58e-14 1.000 -0.440 0.440
C(item_nbr)[T.97] -2.494e-14 0.224 -1.11e-13 1.000 -0.440 0.440
C(item_nbr)[T.98] -4.906e-15 0.224 -2.19e-14 1.000 -0.440 0.440
C(item_nbr)[T.99] -4.644e-16 0.224 -2.07e-15 1.000 -0.440 0.440
C(item_nbr)[T.100] 9.506e-15 0.224 4.24e-14 1.000 -0.440 0.440
C(item_nbr)[T.101] -4.119e-15 0.224 -1.84e-14 1.000 -0.440 0.440
C(item_nbr)[T.102] 1.546e-15 0.224 6.89e-15 1.000 -0.440 0.440
C(item_nbr)[T.103] 3.624e-15 0.224 1.61e-14 1.000 -0.440 0.440
C(item_nbr)[T.104] 1.586e-14 0.224 7.07e-14 1.000 -0.440 0.440
C(item_nbr)[T.105] 3.856e-15 0.224 1.72e-14 1.000 -0.440 0.440
C(item_nbr)[T.106] 2.301e-14 0.224 1.03e-13 1.000 -0.440 0.440
C(item_nbr)[T.107] 1.998e-14 0.224 8.9e-14 1.000 -0.440 0.440
C(item_nbr)[T.108] 5.524e-14 0.224 2.46e-13 1.000 -0.440 0.440
C(item_nbr)[T.109] 2.235e-14 0.224 9.96e-14 1.000 -0.440 0.440
C(item_nbr)[T.110] 2.928e-14 0.224 1.3e-13 1.000 -0.440 0.440
C(item_nbr)[T.111] -4.102e-14 0.224 -1.83e-13 1.000 -0.440 0.440
tmax -0.0055 0.014 -0.389 0.697 -0.033 0.022
tmin 0.0056 0.014 0.405 0.685 -0.021 0.033
tavg 3.866e-05 0.013 0.003 0.998 -0.026 0.026
dewpoint -0.0028 0.017 -0.172 0.864 -0.035 0.030
wetbulb 0.0057 0.014 0.417 0.677 -0.021 0.033
heat -0.0011 0.010 -0.111 0.911 -0.020 0.018
cool 0.0068 0.006 1.064 0.287 -0.006 0.019
preciptotal -0.1204 0.074 -1.618 0.106 -0.266 0.025
stnpressure 0.2167 0.724 0.299 0.765 -1.202 1.636
sealevel -0.1029 0.692 -0.149 0.882 -1.459 1.253
resultspeed 0.0082 0.014 0.608 0.543 -0.018 0.035
avgspeed -0.0093 0.020 -0.469 0.639 -0.048 0.030
sunset 0.0011 0.004 0.313 0.754 -0.006 0.008
sunrise 0.0024 0.004 0.656 0.512 -0.005 0.010
daytime -0.0013 0.001 -2.251 0.024 -0.002 -0.000
relative_humility 0.0016 0.007 0.238 0.812 -0.012 0.015
windchill -0.0045 0.028 -0.160 0.873 -0.059 0.050
weekend 0.5450 0.034 16.052 0.000 0.478 0.612
==============================================================================
Omnibus: 153006.774 Durbin-Watson: 2.010
Prob(Omnibus): 0.000 Jarque-Bera (JB): 4165944523.895
Skew: 9.311 Prob(JB): 0.00
Kurtosis: 1031.040 Cond. No. 5.82e+16
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 5.2e-23. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
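###Markdown
Most of the coefficient rows above are zero-valued item dummies, which makes the table hard to read at a glance. The statistically significant terms can be pulled straight out of the fitted result - a small sketch using `result1` from the cell above.
###Code
# keep only the terms with p-value < 0.05 and show their coefficients (sketch)
sig = result1.pvalues[result1.pvalues < 0.05].sort_values()
print(result1.params[sig.index].round(3))
###Output
_____no_output_____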
###Markdown
2. OLS : df1 (units) - scaling - the condition number in the fit above is too high.
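Before refitting with scaled regressors, it can help to see which numeric columns drive that condition number. One common diagnostic is the variance inflation factor - a minimal sketch, assuming `df1` still holds the numeric weather columns used in the formula above (columns that are exact linear combinations of others, if any, will show up as inf).
###Code
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
# numeric regressors taken from the formula above (sketch)
num_cols = ['tmax', 'tmin', 'tavg', 'dewpoint', 'wetbulb', 'heat', 'cool',
            'preciptotal', 'stnpressure', 'sealevel', 'resultspeed', 'avgspeed',
            'sunset', 'sunrise', 'daytime', 'relative_humility', 'windchill']
X = add_constant(df1[num_cols].astype(float))
vif = pd.DataFrame({'feature': X.columns,
                    'VIF': [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]})
print(vif.sort_values('VIF', ascending=False))
###Output
_____no_output_____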
###Code
model1_1 = sm.OLS.from_formula('units ~ scale(tmax) + scale(tmin) + scale(tavg) + scale(dewpoint) + scale(wetbulb) + scale(heat) + scale(cool)\
+ scale(preciptotal) + scale(stnpressure) + scale(sealevel) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) + scale(sunset) + scale(sunrise) + scale(daytime) + C(year)\
+ C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df1)
result1_1 = model1_1.fit()
print(result1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: units R-squared: 0.915
Model: OLS Adj. R-squared: 0.915
Method: Least Squares F-statistic: 5801.
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:47:51 Log-Likelihood: -2.7907e+05
No. Observations: 94572 AIC: 5.585e+05
Df Residuals: 94396 BIC: 5.601e+05
Df Model: 175
Covariance Type: nonrobust
============================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------------
C(resultdir)[1.0] -0.2951 0.282 -1.047 0.295 -0.847 0.257
C(resultdir)[2.0] -0.2020 0.284 -0.710 0.478 -0.760 0.356
C(resultdir)[3.0] -0.1842 0.255 -0.724 0.469 -0.683 0.315
C(resultdir)[4.0] -0.0680 0.260 -0.262 0.793 -0.577 0.441
C(resultdir)[5.0] -0.1435 0.254 -0.564 0.572 -0.642 0.355
C(resultdir)[6.0] -0.2211 0.265 -0.833 0.405 -0.741 0.299
C(resultdir)[7.0] -0.3082 0.256 -1.205 0.228 -0.809 0.193
C(resultdir)[8.0] 0.1206 0.281 0.429 0.668 -0.430 0.671
C(resultdir)[9.0] -0.0946 0.277 -0.342 0.732 -0.637 0.447
C(resultdir)[10.0] -0.4843 0.321 -1.511 0.131 -1.113 0.144
C(resultdir)[11.0] -0.3820 0.296 -1.289 0.198 -0.963 0.199
C(resultdir)[12.0] -0.2223 0.290 -0.768 0.443 -0.790 0.345
C(resultdir)[13.0] -0.1580 0.274 -0.577 0.564 -0.694 0.378
C(resultdir)[14.0] -0.6924 0.302 -2.295 0.022 -1.284 -0.101
C(resultdir)[15.0] -0.2951 0.287 -1.028 0.304 -0.858 0.268
C(resultdir)[16.0] -0.2108 0.318 -0.663 0.507 -0.834 0.412
C(resultdir)[17.0] 0.0155 0.328 0.047 0.962 -0.627 0.658
C(resultdir)[18.0] -0.2407 0.286 -0.842 0.400 -0.801 0.320
C(resultdir)[19.0] -0.3620 0.271 -1.338 0.181 -0.892 0.168
C(resultdir)[20.0] -0.0236 0.265 -0.089 0.929 -0.543 0.495
C(resultdir)[21.0] -0.3839 0.249 -1.543 0.123 -0.872 0.104
C(resultdir)[22.0] -0.1670 0.241 -0.692 0.489 -0.640 0.306
C(resultdir)[23.0] -0.2258 0.245 -0.922 0.357 -0.706 0.254
C(resultdir)[24.0] -0.2740 0.243 -1.127 0.260 -0.750 0.202
C(resultdir)[25.0] -0.2077 0.241 -0.861 0.389 -0.680 0.265
C(resultdir)[26.0] -0.2239 0.243 -0.920 0.358 -0.701 0.253
C(resultdir)[27.0] -0.2509 0.243 -1.032 0.302 -0.728 0.226
C(resultdir)[28.0] -0.1999 0.241 -0.831 0.406 -0.671 0.272
C(resultdir)[29.0] -0.2443 0.241 -1.015 0.310 -0.716 0.228
C(resultdir)[30.0] -0.2692 0.246 -1.094 0.274 -0.752 0.213
C(resultdir)[31.0] -0.2317 0.246 -0.943 0.346 -0.714 0.250
C(resultdir)[32.0] -0.2745 0.243 -1.129 0.259 -0.751 0.202
C(resultdir)[33.0] -0.2876 0.257 -1.120 0.263 -0.791 0.216
C(resultdir)[34.0] -0.2335 0.266 -0.878 0.380 -0.754 0.288
C(resultdir)[35.0] -0.0857 0.264 -0.325 0.746 -0.603 0.432
C(resultdir)[36.0] -0.2762 0.291 -0.950 0.342 -0.846 0.293
C(year)[T.2013] -0.2052 0.037 -5.578 0.000 -0.277 -0.133
C(year)[T.2014] -0.3567 0.044 -8.041 0.000 -0.444 -0.270
C(month)[T.2] 0.1404 0.098 1.427 0.153 -0.052 0.333
C(month)[T.3] 0.1560 0.140 1.118 0.263 -0.117 0.429
C(month)[T.4] 0.3166 0.220 1.438 0.150 -0.115 0.748
C(month)[T.5] 0.4347 0.281 1.545 0.122 -0.117 0.986
C(month)[T.6] 0.5271 0.301 1.749 0.080 -0.064 1.118
C(month)[T.7] 0.3737 0.282 1.325 0.185 -0.179 0.926
C(month)[T.8] 0.3857 0.237 1.624 0.104 -0.080 0.851
C(month)[T.9] 0.1574 0.206 0.763 0.446 -0.247 0.562
C(month)[T.10] -0.0012 0.209 -0.006 0.995 -0.411 0.409
C(month)[T.11] -0.0869 0.200 -0.435 0.664 -0.479 0.305
C(month)[T.12] -0.0307 0.134 -0.229 0.819 -0.294 0.232
C(weekend)[T.1] 0.5450 0.034 16.052 0.000 0.478 0.612
C(rainY)[T.1] 0.0229 0.042 0.550 0.583 -0.059 0.105
C(item_nbr)[T.2] 3.907e-14 0.224 1.74e-13 1.000 -0.440 0.440
C(item_nbr)[T.3] 2.632e-14 0.224 1.17e-13 1.000 -0.440 0.440
C(item_nbr)[T.4] 2.699e-14 0.224 1.2e-13 1.000 -0.440 0.440
C(item_nbr)[T.5] 2.174e-14 0.224 9.69e-14 1.000 -0.440 0.440
C(item_nbr)[T.6] 4.394e-14 0.224 1.96e-13 1.000 -0.440 0.440
C(item_nbr)[T.7] -1.427e-14 0.224 -6.36e-14 1.000 -0.440 0.440
C(item_nbr)[T.8] 2.992e-14 0.224 1.33e-13 1.000 -0.440 0.440
C(item_nbr)[T.9] -1.828e-14 0.224 -8.15e-14 1.000 -0.440 0.440
C(item_nbr)[T.10] 2.107e-14 0.224 9.39e-14 1.000 -0.440 0.440
C(item_nbr)[T.11] 1.268e-14 0.224 5.65e-14 1.000 -0.440 0.440
C(item_nbr)[T.12] 1.252e-14 0.224 5.58e-14 1.000 -0.440 0.440
C(item_nbr)[T.13] 2.051e-14 0.224 9.14e-14 1.000 -0.440 0.440
C(item_nbr)[T.14] -6.253e-15 0.224 -2.79e-14 1.000 -0.440 0.440
C(item_nbr)[T.15] -9.764e-15 0.224 -4.35e-14 1.000 -0.440 0.440
C(item_nbr)[T.16] 32.8873 0.224 146.566 0.000 32.448 33.327
C(item_nbr)[T.17] -3.064e-14 0.224 -1.37e-13 1.000 -0.440 0.440
C(item_nbr)[T.18] 1.869e-14 0.224 8.33e-14 1.000 -0.440 0.440
C(item_nbr)[T.19] 2.216e-14 0.224 9.87e-14 1.000 -0.440 0.440
C(item_nbr)[T.20] -3.363e-14 0.224 -1.5e-13 1.000 -0.440 0.440
C(item_nbr)[T.21] 2.144e-14 0.224 9.56e-14 1.000 -0.440 0.440
C(item_nbr)[T.22] -4.863e-15 0.224 -2.17e-14 1.000 -0.440 0.440
C(item_nbr)[T.23] 5.93e-14 0.224 2.64e-13 1.000 -0.440 0.440
C(item_nbr)[T.24] -3.839e-16 0.224 -1.71e-15 1.000 -0.440 0.440
C(item_nbr)[T.25] 157.4754 0.224 701.808 0.000 157.036 157.915
C(item_nbr)[T.26] 3.639e-14 0.224 1.62e-13 1.000 -0.440 0.440
C(item_nbr)[T.27] 6.964e-15 0.224 3.1e-14 1.000 -0.440 0.440
C(item_nbr)[T.28] 9.637e-14 0.224 4.29e-13 1.000 -0.440 0.440
C(item_nbr)[T.29] -3.516e-16 0.224 -1.57e-15 1.000 -0.440 0.440
C(item_nbr)[T.30] 4.117e-14 0.224 1.83e-13 1.000 -0.440 0.440
C(item_nbr)[T.31] -1.203e-14 0.224 -5.36e-14 1.000 -0.440 0.440
C(item_nbr)[T.32] -6.765e-16 0.224 -3.02e-15 1.000 -0.440 0.440
C(item_nbr)[T.33] -3.617e-14 0.224 -1.61e-13 1.000 -0.440 0.440
C(item_nbr)[T.34] 3.191e-14 0.224 1.42e-13 1.000 -0.440 0.440
C(item_nbr)[T.35] -2.249e-14 0.224 -1e-13 1.000 -0.440 0.440
C(item_nbr)[T.36] -2.853e-14 0.224 -1.27e-13 1.000 -0.440 0.440
C(item_nbr)[T.37] 3.271e-14 0.224 1.46e-13 1.000 -0.440 0.440
C(item_nbr)[T.38] 4.423e-15 0.224 1.97e-14 1.000 -0.440 0.440
C(item_nbr)[T.39] 0.1655 0.224 0.738 0.461 -0.274 0.605
C(item_nbr)[T.40] 1.527e-14 0.224 6.8e-14 1.000 -0.440 0.440
C(item_nbr)[T.41] -5.017e-14 0.224 -2.24e-13 1.000 -0.440 0.440
C(item_nbr)[T.42] -9.393e-15 0.224 -4.19e-14 1.000 -0.440 0.440
C(item_nbr)[T.43] -6.359e-14 0.224 -2.83e-13 1.000 -0.440 0.440
C(item_nbr)[T.44] -7.888e-14 0.224 -3.52e-13 1.000 -0.440 0.440
C(item_nbr)[T.45] 2.46e-14 0.224 1.1e-13 1.000 -0.440 0.440
C(item_nbr)[T.46] -1.954e-15 0.224 -8.71e-15 1.000 -0.440 0.440
C(item_nbr)[T.47] -3.92e-14 0.224 -1.75e-13 1.000 -0.440 0.440
C(item_nbr)[T.48] 1.312e-14 0.224 5.85e-14 1.000 -0.440 0.440
C(item_nbr)[T.49] -1.658e-14 0.224 -7.39e-14 1.000 -0.440 0.440
C(item_nbr)[T.50] 0.3580 0.224 1.595 0.111 -0.082 0.798
C(item_nbr)[T.51] -3.917e-14 0.224 -1.75e-13 1.000 -0.440 0.440
C(item_nbr)[T.52] -1.774e-13 0.224 -7.91e-13 1.000 -0.440 0.440
C(item_nbr)[T.53] 3.574e-14 0.224 1.59e-13 1.000 -0.440 0.440
C(item_nbr)[T.54] -1.231e-14 0.224 -5.49e-14 1.000 -0.440 0.440
C(item_nbr)[T.55] 5.503e-14 0.224 2.45e-13 1.000 -0.440 0.440
C(item_nbr)[T.56] 1.903e-15 0.224 8.48e-15 1.000 -0.440 0.440
C(item_nbr)[T.57] 4.426e-15 0.224 1.97e-14 1.000 -0.440 0.440
C(item_nbr)[T.58] -7.554e-15 0.224 -3.37e-14 1.000 -0.440 0.440
C(item_nbr)[T.59] 6.4e-15 0.224 2.85e-14 1.000 -0.440 0.440
C(item_nbr)[T.60] 7.458e-15 0.224 3.32e-14 1.000 -0.440 0.440
C(item_nbr)[T.61] 2.446e-14 0.224 1.09e-13 1.000 -0.440 0.440
C(item_nbr)[T.62] 1.486e-14 0.224 6.62e-14 1.000 -0.440 0.440
C(item_nbr)[T.63] -2.665e-14 0.224 -1.19e-13 1.000 -0.440 0.440
C(item_nbr)[T.64] 0.7676 0.224 3.421 0.001 0.328 1.207
C(item_nbr)[T.65] 6.709e-15 0.224 2.99e-14 1.000 -0.440 0.440
C(item_nbr)[T.66] -3.226e-14 0.224 -1.44e-13 1.000 -0.440 0.440
C(item_nbr)[T.67] -7.141e-14 0.224 -3.18e-13 1.000 -0.440 0.440
C(item_nbr)[T.68] -8.531e-15 0.224 -3.8e-14 1.000 -0.440 0.440
C(item_nbr)[T.69] 5.688e-14 0.224 2.54e-13 1.000 -0.440 0.440
C(item_nbr)[T.70] 1.483e-14 0.224 6.61e-14 1.000 -0.440 0.440
C(item_nbr)[T.71] -1.508e-14 0.224 -6.72e-14 1.000 -0.440 0.440
C(item_nbr)[T.72] 1.82e-14 0.224 8.11e-14 1.000 -0.440 0.440
C(item_nbr)[T.73] 2.055e-14 0.224 9.16e-14 1.000 -0.440 0.440
C(item_nbr)[T.74] -2.749e-14 0.224 -1.23e-13 1.000 -0.440 0.440
C(item_nbr)[T.75] -4.247e-14 0.224 -1.89e-13 1.000 -0.440 0.440
C(item_nbr)[T.76] -9.669e-15 0.224 -4.31e-14 1.000 -0.440 0.440
C(item_nbr)[T.77] 1.0340 0.224 4.608 0.000 0.594 1.474
C(item_nbr)[T.78] -6.599e-16 0.224 -2.94e-15 1.000 -0.440 0.440
C(item_nbr)[T.79] -7.329e-15 0.224 -3.27e-14 1.000 -0.440 0.440
C(item_nbr)[T.80] -2.049e-14 0.224 -9.13e-14 1.000 -0.440 0.440
C(item_nbr)[T.81] -2.52e-14 0.224 -1.12e-13 1.000 -0.440 0.440
C(item_nbr)[T.82] -4.655e-15 0.224 -2.07e-14 1.000 -0.440 0.440
C(item_nbr)[T.83] 6.616e-15 0.224 2.95e-14 1.000 -0.440 0.440
C(item_nbr)[T.84] -1.411e-14 0.224 -6.29e-14 1.000 -0.440 0.440
C(item_nbr)[T.85] 0.0786 0.224 0.350 0.726 -0.361 0.518
C(item_nbr)[T.86] 1.256e-14 0.224 5.6e-14 1.000 -0.440 0.440
C(item_nbr)[T.87] -1.816e-14 0.224 -8.1e-14 1.000 -0.440 0.440
C(item_nbr)[T.88] -3.634e-14 0.224 -1.62e-13 1.000 -0.440 0.440
C(item_nbr)[T.89] -1.358e-14 0.224 -6.05e-14 1.000 -0.440 0.440
C(item_nbr)[T.90] -3.637e-16 0.224 -1.62e-15 1.000 -0.440 0.440
C(item_nbr)[T.91] 6.837e-14 0.224 3.05e-13 1.000 -0.440 0.440
C(item_nbr)[T.92] -1.03e-13 0.224 -4.59e-13 1.000 -0.440 0.440
C(item_nbr)[T.93] 0.6819 0.224 3.039 0.002 0.242 1.122
C(item_nbr)[T.94] 2.075e-14 0.224 9.25e-14 1.000 -0.440 0.440
C(item_nbr)[T.95] 2.596e-16 0.224 1.16e-15 1.000 -0.440 0.440
C(item_nbr)[T.96] -1.552e-15 0.224 -6.92e-15 1.000 -0.440 0.440
C(item_nbr)[T.97] 3.459e-16 0.224 1.54e-15 1.000 -0.440 0.440
C(item_nbr)[T.98] -1.338e-14 0.224 -5.96e-14 1.000 -0.440 0.440
C(item_nbr)[T.99] -3.634e-15 0.224 -1.62e-14 1.000 -0.440 0.440
C(item_nbr)[T.100] -1.04e-14 0.224 -4.64e-14 1.000 -0.440 0.440
C(item_nbr)[T.101] -1.976e-14 0.224 -8.81e-14 1.000 -0.440 0.440
C(item_nbr)[T.102] -2.114e-14 0.224 -9.42e-14 1.000 -0.440 0.440
C(item_nbr)[T.103] 1.634e-14 0.224 7.28e-14 1.000 -0.440 0.440
C(item_nbr)[T.104] -2.755e-14 0.224 -1.23e-13 1.000 -0.440 0.440
C(item_nbr)[T.105] -8.957e-15 0.224 -3.99e-14 1.000 -0.440 0.440
C(item_nbr)[T.106] 8.962e-15 0.224 3.99e-14 1.000 -0.440 0.440
C(item_nbr)[T.107] -1.595e-14 0.224 -7.11e-14 1.000 -0.440 0.440
C(item_nbr)[T.108] 2.187e-15 0.224 9.75e-15 1.000 -0.440 0.440
C(item_nbr)[T.109] -2.766e-14 0.224 -1.23e-13 1.000 -0.440 0.440
C(item_nbr)[T.110] -1.937e-15 0.224 -8.63e-15 1.000 -0.440 0.440
C(item_nbr)[T.111] -3.316e-14 0.224 -1.48e-13 1.000 -0.440 0.440
scale(tmax) -0.0984 0.258 -0.381 0.703 -0.605 0.408
scale(tmin) 0.0968 0.230 0.421 0.674 -0.354 0.548
scale(tavg) -0.0053 0.238 -0.022 0.982 -0.473 0.462
scale(dewpoint) -0.0538 0.313 -0.172 0.864 -0.667 0.559
scale(wetbulb) 0.0927 0.222 0.417 0.677 -0.343 0.528
scale(heat) -0.0163 0.147 -0.111 0.911 -0.304 0.271
scale(cool) 0.0318 0.030 1.064 0.287 -0.027 0.090
scale(preciptotal) -0.0310 0.019 -1.618 0.106 -0.069 0.007
scale(stnpressure) 0.0452 0.151 0.299 0.765 -0.251 0.341
scale(sealevel) -0.0222 0.149 -0.149 0.882 -0.315 0.271
scale(resultspeed) 0.0341 0.056 0.608 0.543 -0.076 0.144
scale(avgspeed) -0.0337 0.072 -0.469 0.639 -0.175 0.107
scale(sunset) 0.0309 0.228 0.136 0.892 -0.415 0.477
scale(sunrise) 0.1900 0.234 0.811 0.417 -0.269 0.649
scale(daytime) -0.0791 0.036 -2.220 0.026 -0.149 -0.009
scale(relative_humility) 0.0265 0.111 0.238 0.812 -0.192 0.245
scale(windchill) -0.0980 0.611 -0.160 0.873 -1.296 1.100
==============================================================================
Omnibus: 153006.774 Durbin-Watson: 2.010
Prob(Omnibus): 0.000 Jarque-Bera (JB): 4165944523.895
Skew: 9.311 Prob(JB): 0.00
Kurtosis: 1031.040 Cond. No. 5.43e+15
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 3e-26. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
Scaling did not bring the condition number down much. 3. OLS : df1 (units) - outlier removal
###Code
# Remove outliers
# drop observations whose Cook's distance exceeds the Fox criterion 4 / (n - 2)
influence = result1.get_influence()
cooks_d2, pvals = influence.cooks_distance
fox_cr = 4 / (len(df1) - 2)
idx_outlier = np.where(cooks_d2 > fox_cr)[0]
len(idx_outlier)
idx = list(set(range(len(df1))).difference(idx_outlier))
df1_1 = df1.iloc[idx, :].reset_index(drop=True)
df1_1.tail()
# OLS - df1_1
model1_1_1 = sm.OLS.from_formula('units ~ scale(tmax) + scale(tmin) + scale(tavg) + scale(dewpoint) + scale(wetbulb) + scale(heat) + scale(cool)\
+ scale(preciptotal) + scale(stnpressure) + scale(sealevel) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) +scale(sunset) + scale(sunrise) + scale(daytime)\
+ C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df1_1)
result1_1_1 = model1_1_1.fit()
print(result1_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: units R-squared: 0.943
Model: OLS Adj. R-squared: 0.943
Method: Least Squares F-statistic: 8881.
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:48:30 Log-Likelihood: -1.7977e+05
No. Observations: 93420 AIC: 3.599e+05
Df Residuals: 93244 BIC: 3.615e+05
Df Model: 175
Covariance Type: nonrobust
============================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------------
C(resultdir)[1.0] -0.0107 0.101 -0.106 0.916 -0.210 0.188
C(resultdir)[2.0] -0.0047 0.102 -0.046 0.963 -0.205 0.196
C(resultdir)[3.0] 0.0301 0.092 0.329 0.742 -0.149 0.209
C(resultdir)[4.0] 0.0055 0.093 0.059 0.953 -0.178 0.189
C(resultdir)[5.0] -0.0373 0.091 -0.408 0.684 -0.216 0.142
C(resultdir)[6.0] 0.1063 0.095 1.114 0.265 -0.081 0.293
C(resultdir)[7.0] 0.0135 0.092 0.147 0.883 -0.167 0.194
C(resultdir)[8.0] 0.1054 0.101 1.042 0.297 -0.093 0.304
C(resultdir)[9.0] 0.0259 0.099 0.261 0.794 -0.169 0.221
C(resultdir)[10.0] 0.0675 0.115 0.585 0.558 -0.159 0.294
C(resultdir)[11.0] -0.1318 0.107 -1.236 0.216 -0.341 0.077
C(resultdir)[12.0] -0.0121 0.104 -0.117 0.907 -0.216 0.192
C(resultdir)[13.0] -0.0214 0.098 -0.217 0.828 -0.214 0.171
C(resultdir)[14.0] 0.0075 0.109 0.069 0.945 -0.205 0.220
C(resultdir)[15.0] 0.0279 0.103 0.270 0.787 -0.175 0.230
C(resultdir)[16.0] -0.0564 0.114 -0.494 0.622 -0.280 0.168
C(resultdir)[17.0] 0.0708 0.118 0.601 0.548 -0.160 0.302
C(resultdir)[18.0] 0.0341 0.103 0.331 0.740 -0.168 0.236
C(resultdir)[19.0] -0.0998 0.097 -1.025 0.305 -0.290 0.091
C(resultdir)[20.0] -0.0101 0.095 -0.106 0.916 -0.197 0.177
C(resultdir)[21.0] 0.0144 0.089 0.162 0.872 -0.161 0.190
C(resultdir)[22.0] 0.0204 0.087 0.235 0.814 -0.150 0.190
C(resultdir)[23.0] 0.0255 0.088 0.290 0.772 -0.147 0.198
C(resultdir)[24.0] -0.0209 0.087 -0.240 0.811 -0.192 0.150
C(resultdir)[25.0] 0.0415 0.087 0.479 0.632 -0.128 0.211
C(resultdir)[26.0] 0.0147 0.087 0.168 0.867 -0.157 0.186
C(resultdir)[27.0] 0.0149 0.087 0.171 0.864 -0.156 0.186
C(resultdir)[28.0] 0.0060 0.086 0.069 0.945 -0.164 0.175
C(resultdir)[29.0] 0.0230 0.087 0.265 0.791 -0.147 0.193
C(resultdir)[30.0] 0.0181 0.088 0.205 0.838 -0.155 0.192
C(resultdir)[31.0] -0.0038 0.088 -0.043 0.966 -0.177 0.169
C(resultdir)[32.0] 0.0420 0.087 0.480 0.631 -0.129 0.213
C(resultdir)[33.0] -0.0395 0.092 -0.428 0.669 -0.221 0.142
C(resultdir)[34.0] 0.0599 0.096 0.626 0.531 -0.128 0.247
C(resultdir)[35.0] 0.0180 0.095 0.190 0.849 -0.168 0.204
C(resultdir)[36.0] 0.0009 0.105 0.009 0.993 -0.204 0.206
C(year)[T.2013] -0.0239 0.013 -1.801 0.072 -0.050 0.002
C(year)[T.2014] -0.0475 0.016 -2.973 0.003 -0.079 -0.016
C(month)[T.2] -0.0698 0.035 -1.969 0.049 -0.139 -0.000
C(month)[T.3] -0.0250 0.050 -0.497 0.619 -0.124 0.074
C(month)[T.4] 0.0036 0.079 0.045 0.964 -0.152 0.159
C(month)[T.5] 0.0033 0.101 0.033 0.974 -0.195 0.202
C(month)[T.6] -0.0034 0.109 -0.032 0.975 -0.216 0.209
C(month)[T.7] -0.0528 0.102 -0.520 0.603 -0.252 0.146
C(month)[T.8] -0.0191 0.086 -0.223 0.823 -0.187 0.149
C(month)[T.9] -0.0102 0.074 -0.138 0.890 -0.156 0.136
C(month)[T.10] -0.0122 0.075 -0.162 0.871 -0.160 0.136
C(month)[T.11] 0.0558 0.072 0.774 0.439 -0.085 0.197
C(month)[T.12] 0.0054 0.048 0.111 0.911 -0.089 0.100
C(weekend)[T.1] 0.0710 0.012 5.800 0.000 0.047 0.095
C(rainY)[T.1] -0.0057 0.015 -0.379 0.705 -0.035 0.024
C(item_nbr)[T.2] -9.662e-15 0.080 -1.2e-13 1.000 -0.158 0.158
C(item_nbr)[T.3] -1.107e-14 0.080 -1.38e-13 1.000 -0.158 0.158
C(item_nbr)[T.4] -1.216e-14 0.080 -1.51e-13 1.000 -0.158 0.158
C(item_nbr)[T.5] -7.965e-15 0.080 -9.91e-14 1.000 -0.158 0.158
C(item_nbr)[T.6] 5.422e-14 0.080 6.75e-13 1.000 -0.158 0.158
C(item_nbr)[T.7] 3.468e-14 0.080 4.31e-13 1.000 -0.158 0.158
C(item_nbr)[T.8] 3.276e-13 0.080 4.08e-12 1.000 -0.158 0.158
C(item_nbr)[T.9] 2.018e-13 0.080 2.51e-12 1.000 -0.158 0.158
C(item_nbr)[T.10] 2.126e-14 0.080 2.64e-13 1.000 -0.158 0.158
C(item_nbr)[T.11] -9.061e-14 0.080 -1.13e-12 1.000 -0.158 0.158
C(item_nbr)[T.12] -5.319e-14 0.080 -6.62e-13 1.000 -0.158 0.158
C(item_nbr)[T.13] 5.869e-15 0.080 7.3e-14 1.000 -0.158 0.158
C(item_nbr)[T.14] -8.782e-15 0.080 -1.09e-13 1.000 -0.158 0.158
C(item_nbr)[T.15] -7.608e-15 0.080 -9.47e-14 1.000 -0.158 0.158
C(item_nbr)[T.16] 28.6974 0.100 287.249 0.000 28.502 28.893
C(item_nbr)[T.17] -1.307e-14 0.080 -1.63e-13 1.000 -0.158 0.158
C(item_nbr)[T.18] -1.791e-14 0.080 -2.23e-13 1.000 -0.158 0.158
C(item_nbr)[T.19] 1.759e-15 0.080 2.19e-14 1.000 -0.158 0.158
C(item_nbr)[T.20] -1.021e-14 0.080 -1.27e-13 1.000 -0.158 0.158
C(item_nbr)[T.21] -7.24e-15 0.080 -9.01e-14 1.000 -0.158 0.158
C(item_nbr)[T.22] -7.646e-15 0.080 -9.51e-14 1.000 -0.158 0.158
C(item_nbr)[T.23] -7.665e-15 0.080 -9.54e-14 1.000 -0.158 0.158
C(item_nbr)[T.24] -6.374e-15 0.080 -7.93e-14 1.000 -0.158 0.158
C(item_nbr)[T.25] 152.5770 0.139 1094.511 0.000 152.304 152.850
C(item_nbr)[T.26] -6.135e-15 0.080 -7.63e-14 1.000 -0.158 0.158
C(item_nbr)[T.27] -5.917e-15 0.080 -7.36e-14 1.000 -0.158 0.158
C(item_nbr)[T.28] -3.208e-15 0.080 -3.99e-14 1.000 -0.158 0.158
C(item_nbr)[T.29] -9.756e-15 0.080 -1.21e-13 1.000 -0.158 0.158
C(item_nbr)[T.30] -4.552e-15 0.080 -5.66e-14 1.000 -0.158 0.158
C(item_nbr)[T.31] -8e-15 0.080 -9.95e-14 1.000 -0.158 0.158
C(item_nbr)[T.32] -6.981e-15 0.080 -8.69e-14 1.000 -0.158 0.158
C(item_nbr)[T.33] -7.39e-15 0.080 -9.19e-14 1.000 -0.158 0.158
C(item_nbr)[T.34] -8.454e-15 0.080 -1.05e-13 1.000 -0.158 0.158
C(item_nbr)[T.35] -3.839e-15 0.080 -4.78e-14 1.000 -0.158 0.158
C(item_nbr)[T.36] -5.282e-15 0.080 -6.57e-14 1.000 -0.158 0.158
C(item_nbr)[T.37] -5.585e-15 0.080 -6.95e-14 1.000 -0.158 0.158
C(item_nbr)[T.38] -5.657e-15 0.080 -7.04e-14 1.000 -0.158 0.158
C(item_nbr)[T.39] 0.1539 0.080 1.915 0.056 -0.004 0.312
C(item_nbr)[T.40] -6.536e-15 0.080 -8.13e-14 1.000 -0.158 0.158
C(item_nbr)[T.41] -7.126e-15 0.080 -8.86e-14 1.000 -0.158 0.158
C(item_nbr)[T.42] -7.386e-15 0.080 -9.19e-14 1.000 -0.158 0.158
C(item_nbr)[T.43] -9.425e-15 0.080 -1.17e-13 1.000 -0.158 0.158
C(item_nbr)[T.44] -7.999e-15 0.080 -9.95e-14 1.000 -0.158 0.158
C(item_nbr)[T.45] -5.243e-15 0.080 -6.52e-14 1.000 -0.158 0.158
C(item_nbr)[T.46] -3.753e-15 0.080 -4.67e-14 1.000 -0.158 0.158
C(item_nbr)[T.47] -1.067e-15 0.080 -1.33e-14 1.000 -0.158 0.158
C(item_nbr)[T.48] -6.436e-15 0.080 -8.01e-14 1.000 -0.158 0.158
C(item_nbr)[T.49] -8.629e-15 0.080 -1.07e-13 1.000 -0.158 0.158
C(item_nbr)[T.50] 0.2815 0.081 3.496 0.000 0.124 0.439
C(item_nbr)[T.51] -5.386e-15 0.080 -6.7e-14 1.000 -0.158 0.158
C(item_nbr)[T.52] -2.705e-15 0.080 -3.37e-14 1.000 -0.158 0.158
C(item_nbr)[T.53] -4.897e-15 0.080 -6.09e-14 1.000 -0.158 0.158
C(item_nbr)[T.54] -5.403e-15 0.080 -6.72e-14 1.000 -0.158 0.158
C(item_nbr)[T.55] -7.352e-15 0.080 -9.15e-14 1.000 -0.158 0.158
C(item_nbr)[T.56] -6.614e-15 0.080 -8.23e-14 1.000 -0.158 0.158
C(item_nbr)[T.57] -7.036e-15 0.080 -8.75e-14 1.000 -0.158 0.158
C(item_nbr)[T.58] -6.813e-15 0.080 -8.48e-14 1.000 -0.158 0.158
C(item_nbr)[T.59] -7.694e-15 0.080 -9.57e-14 1.000 -0.158 0.158
C(item_nbr)[T.60] -5.844e-15 0.080 -7.27e-14 1.000 -0.158 0.158
C(item_nbr)[T.61] -5.816e-15 0.080 -7.24e-14 1.000 -0.158 0.158
C(item_nbr)[T.62] -6.211e-15 0.080 -7.73e-14 1.000 -0.158 0.158
C(item_nbr)[T.63] -7.676e-15 0.080 -9.55e-14 1.000 -0.158 0.158
C(item_nbr)[T.64] 0.7676 0.080 9.549 0.000 0.610 0.925
C(item_nbr)[T.65] -7.625e-15 0.080 -9.49e-14 1.000 -0.158 0.158
C(item_nbr)[T.66] -4.346e-15 0.080 -5.41e-14 1.000 -0.158 0.158
C(item_nbr)[T.67] -8.283e-15 0.080 -1.03e-13 1.000 -0.158 0.158
C(item_nbr)[T.68] -5.379e-15 0.080 -6.69e-14 1.000 -0.158 0.158
C(item_nbr)[T.69] -5.666e-15 0.080 -7.05e-14 1.000 -0.158 0.158
C(item_nbr)[T.70] -7.633e-15 0.080 -9.5e-14 1.000 -0.158 0.158
C(item_nbr)[T.71] -3.156e-15 0.080 -3.93e-14 1.000 -0.158 0.158
C(item_nbr)[T.72] -7.388e-15 0.080 -9.19e-14 1.000 -0.158 0.158
C(item_nbr)[T.73] -6.552e-15 0.080 -8.15e-14 1.000 -0.158 0.158
C(item_nbr)[T.74] -4.49e-15 0.080 -5.59e-14 1.000 -0.158 0.158
C(item_nbr)[T.75] -9.193e-15 0.080 -1.14e-13 1.000 -0.158 0.158
C(item_nbr)[T.76] -1.057e-14 0.080 -1.31e-13 1.000 -0.158 0.158
C(item_nbr)[T.77] 0.9753 0.081 12.100 0.000 0.817 1.133
C(item_nbr)[T.78] -8.473e-15 0.080 -1.05e-13 1.000 -0.158 0.158
C(item_nbr)[T.79] -5.69e-15 0.080 -7.08e-14 1.000 -0.158 0.158
C(item_nbr)[T.80] -4.426e-15 0.080 -5.51e-14 1.000 -0.158 0.158
C(item_nbr)[T.81] -8.229e-15 0.080 -1.02e-13 1.000 -0.158 0.158
C(item_nbr)[T.82] -7.555e-15 0.080 -9.4e-14 1.000 -0.158 0.158
C(item_nbr)[T.83] -6.418e-15 0.080 -7.98e-14 1.000 -0.158 0.158
C(item_nbr)[T.84] -6.301e-15 0.080 -7.84e-14 1.000 -0.158 0.158
C(item_nbr)[T.85] 0.0786 0.080 0.978 0.328 -0.079 0.236
C(item_nbr)[T.86] -6.374e-15 0.080 -7.93e-14 1.000 -0.158 0.158
C(item_nbr)[T.87] -5.674e-15 0.080 -7.06e-14 1.000 -0.158 0.158
C(item_nbr)[T.88] -4.964e-15 0.080 -6.18e-14 1.000 -0.158 0.158
C(item_nbr)[T.89] -6.829e-15 0.080 -8.5e-14 1.000 -0.158 0.158
C(item_nbr)[T.90] -6.666e-15 0.080 -8.29e-14 1.000 -0.158 0.158
C(item_nbr)[T.91] -5.038e-15 0.080 -6.27e-14 1.000 -0.158 0.158
C(item_nbr)[T.92] -6.261e-15 0.080 -7.79e-14 1.000 -0.158 0.158
C(item_nbr)[T.93] 0.4615 0.081 5.724 0.000 0.303 0.620
C(item_nbr)[T.94] -7.585e-15 0.080 -9.44e-14 1.000 -0.158 0.158
C(item_nbr)[T.95] -8.702e-15 0.080 -1.08e-13 1.000 -0.158 0.158
C(item_nbr)[T.96] -4.579e-15 0.080 -5.7e-14 1.000 -0.158 0.158
C(item_nbr)[T.97] -6.304e-15 0.080 -7.84e-14 1.000 -0.158 0.158
C(item_nbr)[T.98] -4.699e-15 0.080 -5.85e-14 1.000 -0.158 0.158
C(item_nbr)[T.99] -6.353e-15 0.080 -7.9e-14 1.000 -0.158 0.158
C(item_nbr)[T.100] -5.746e-15 0.080 -7.15e-14 1.000 -0.158 0.158
C(item_nbr)[T.101] -6.194e-15 0.080 -7.71e-14 1.000 -0.158 0.158
C(item_nbr)[T.102] -5.601e-15 0.080 -6.97e-14 1.000 -0.158 0.158
C(item_nbr)[T.103] -5.432e-15 0.080 -6.76e-14 1.000 -0.158 0.158
C(item_nbr)[T.104] -6.203e-15 0.080 -7.72e-14 1.000 -0.158 0.158
C(item_nbr)[T.105] -7.76e-15 0.080 -9.65e-14 1.000 -0.158 0.158
C(item_nbr)[T.106] -1.131e-14 0.080 -1.41e-13 1.000 -0.158 0.158
C(item_nbr)[T.107] -7.446e-15 0.080 -9.26e-14 1.000 -0.158 0.158
C(item_nbr)[T.108] -5.616e-15 0.080 -6.99e-14 1.000 -0.158 0.158
C(item_nbr)[T.109] -7.377e-15 0.080 -9.18e-14 1.000 -0.158 0.158
C(item_nbr)[T.110] 2.567e-15 0.080 3.19e-14 1.000 -0.158 0.158
C(item_nbr)[T.111] -2.627e-14 0.080 -3.27e-13 1.000 -0.158 0.158
scale(tmax) -0.0285 0.093 -0.307 0.759 -0.211 0.154
scale(tmin) -0.0047 0.083 -0.057 0.955 -0.167 0.158
scale(tavg) -0.0174 0.086 -0.202 0.840 -0.186 0.151
scale(dewpoint) 0.0239 0.113 0.212 0.832 -0.197 0.245
scale(wetbulb) 0.0005 0.080 0.006 0.996 -0.157 0.158
scale(heat) -0.0804 0.053 -1.521 0.128 -0.184 0.023
scale(cool) 0.0201 0.011 1.865 0.062 -0.001 0.041
scale(preciptotal) -0.0030 0.007 -0.429 0.668 -0.017 0.011
scale(stnpressure) 0.0664 0.054 1.219 0.223 -0.040 0.173
scale(sealevel) -0.0721 0.054 -1.339 0.181 -0.178 0.033
scale(resultspeed) 0.0214 0.020 1.058 0.290 -0.018 0.061
scale(avgspeed) -0.0290 0.026 -1.118 0.264 -0.080 0.022
scale(sunset) 0.1128 0.082 1.375 0.169 -0.048 0.274
scale(sunrise) 0.1155 0.084 1.369 0.171 -0.050 0.281
scale(daytime) -8.722e-05 0.013 -0.007 0.995 -0.025 0.025
scale(relative_humility) 0.0011 0.040 0.028 0.978 -0.078 0.080
scale(windchill) -0.0719 0.220 -0.327 0.744 -0.504 0.360
==============================================================================
Omnibus: 159187.015 Durbin-Watson: 2.003
Prob(Omnibus): 0.000 Jarque-Bera (JB): 25819966227.843
Skew: -10.019 Prob(JB): 0.00
Kurtosis: 2578.434 Cond. No. 6.18e+15
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 2.29e-26. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
R-squared rose slightly, but the condition number rose as well. 3-1. OLS : df1 (units) - outlier removal + drop tmax/tmin/tavg + drop dewpoint/wetbulb + drop stnpressure/sealevel + drop resultdir + drop sunset/sunrise/daytime
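The dropped columns move almost in lockstep with one another (the temperature, humidity, pressure, and daylight variables), which is what inflates the condition number. A quick confirmation - a sketch, assuming `df1_1` from the outlier-removal step above.
###Code
# pairwise correlations among the dropped weather columns (sketch)
cols = ['tmax', 'tmin', 'tavg', 'dewpoint', 'wetbulb',
        'stnpressure', 'sealevel', 'sunset', 'sunrise', 'daytime']
print(df1_1[cols].corr().round(2))
###Output
_____no_output_____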
###Code
# OLS - df1_1
model1_1_1 = sm.OLS.from_formula('units ~ scale(heat) + scale(cool)\
+ scale(preciptotal) + scale(resultspeed) \
+ scale(avgspeed) + C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df1_1)
result1_1_1 = model1_1_1.fit()
print(result1_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: units R-squared: 0.943
Model: OLS Adj. R-squared: 0.943
Method: Least Squares F-statistic: 1.184e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:48:44 Log-Likelihood: -1.8060e+05
No. Observations: 93968 AIC: 3.615e+05
Df Residuals: 93835 BIC: 3.627e+05
Df Model: 132
Covariance Type: nonrobust
============================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------------
C(year)[2012] 0.0487 0.062 0.783 0.434 -0.073 0.171
C(year)[2013] 0.0188 0.063 0.299 0.765 -0.104 0.142
C(year)[2014] 0.0010 0.064 0.015 0.988 -0.124 0.126
C(month)[T.2] -0.0472 0.027 -1.747 0.081 -0.100 0.006
C(month)[T.3] -0.0275 0.027 -1.029 0.303 -0.080 0.025
C(month)[T.4] -0.0355 0.031 -1.130 0.258 -0.097 0.026
C(month)[T.5] -0.0537 0.035 -1.523 0.128 -0.123 0.015
C(month)[T.6] -0.0469 0.039 -1.192 0.233 -0.124 0.030
C(month)[T.7] -0.0785 0.043 -1.818 0.069 -0.163 0.006
C(month)[T.8] -0.0428 0.041 -1.038 0.299 -0.124 0.038
C(month)[T.9] -0.0630 0.037 -1.703 0.088 -0.135 0.009
C(month)[T.10] -0.0991 0.033 -3.026 0.002 -0.163 -0.035
C(month)[T.11] -0.0315 0.030 -1.039 0.299 -0.091 0.028
C(month)[T.12] -0.0381 0.030 -1.278 0.201 -0.097 0.020
C(weekend)[T.1] 0.0709 0.012 5.925 0.000 0.047 0.094
C(rainY)[T.1] -0.0010 0.014 -0.067 0.947 -0.029 0.027
C(item_nbr)[T.2] 4.623e-15 0.080 5.78e-14 1.000 -0.157 0.157
C(item_nbr)[T.3] 2.849e-15 0.080 3.56e-14 1.000 -0.157 0.157
C(item_nbr)[T.4] -9.053e-14 0.080 -1.13e-12 1.000 -0.157 0.157
C(item_nbr)[T.5] 2.025e-14 0.080 2.53e-13 1.000 -0.157 0.157
C(item_nbr)[T.6] 1.062e-14 0.080 1.33e-13 1.000 -0.157 0.157
C(item_nbr)[T.7] 1.272e-13 0.080 1.59e-12 1.000 -0.157 0.157
C(item_nbr)[T.8] 1.751e-14 0.080 2.19e-13 1.000 -0.157 0.157
C(item_nbr)[T.9] -3.132e-15 0.080 -3.92e-14 1.000 -0.157 0.157
C(item_nbr)[T.10] 3.324e-15 0.080 4.16e-14 1.000 -0.157 0.157
C(item_nbr)[T.11] -4.445e-15 0.080 -5.56e-14 1.000 -0.157 0.157
C(item_nbr)[T.12] -7.099e-15 0.080 -8.88e-14 1.000 -0.157 0.157
C(item_nbr)[T.13] -7.041e-15 0.080 -8.81e-14 1.000 -0.157 0.157
C(item_nbr)[T.14] -1.985e-14 0.080 -2.48e-13 1.000 -0.157 0.157
C(item_nbr)[T.15] -4.064e-16 0.080 -5.08e-15 1.000 -0.157 0.157
C(item_nbr)[T.16] 28.7190 0.099 289.225 0.000 28.524 28.914
C(item_nbr)[T.17] -1.501e-14 0.080 -1.88e-13 1.000 -0.157 0.157
C(item_nbr)[T.18] -7.338e-15 0.080 -9.18e-14 1.000 -0.157 0.157
C(item_nbr)[T.19] 3.867e-17 0.080 4.84e-16 1.000 -0.157 0.157
C(item_nbr)[T.20] -4.47e-15 0.080 -5.59e-14 1.000 -0.157 0.157
C(item_nbr)[T.21] -1.846e-15 0.080 -2.31e-14 1.000 -0.157 0.157
C(item_nbr)[T.22] -4.338e-15 0.080 -5.43e-14 1.000 -0.157 0.157
C(item_nbr)[T.23] -6.086e-15 0.080 -7.61e-14 1.000 -0.157 0.157
C(item_nbr)[T.24] -4.645e-15 0.080 -5.81e-14 1.000 -0.157 0.157
C(item_nbr)[T.25] 152.5768 0.139 1098.078 0.000 152.304 152.849
C(item_nbr)[T.26] -4.822e-15 0.080 -6.03e-14 1.000 -0.157 0.157
C(item_nbr)[T.27] -6.318e-15 0.080 -7.9e-14 1.000 -0.157 0.157
C(item_nbr)[T.28] -8.93e-15 0.080 -1.12e-13 1.000 -0.157 0.157
C(item_nbr)[T.29] -5.305e-15 0.080 -6.64e-14 1.000 -0.157 0.157
C(item_nbr)[T.30] -7.9e-15 0.080 -9.88e-14 1.000 -0.157 0.157
C(item_nbr)[T.31] -9.633e-15 0.080 -1.21e-13 1.000 -0.157 0.157
C(item_nbr)[T.32] 4.532e-15 0.080 5.67e-14 1.000 -0.157 0.157
C(item_nbr)[T.33] -8.366e-15 0.080 -1.05e-13 1.000 -0.157 0.157
C(item_nbr)[T.34] -1.059e-14 0.080 -1.32e-13 1.000 -0.157 0.157
C(item_nbr)[T.35] -1.041e-14 0.080 -1.3e-13 1.000 -0.157 0.157
C(item_nbr)[T.36] -9.4e-15 0.080 -1.18e-13 1.000 -0.157 0.157
C(item_nbr)[T.37] -8.482e-15 0.080 -1.06e-13 1.000 -0.157 0.157
C(item_nbr)[T.38] -8.258e-15 0.080 -1.03e-13 1.000 -0.157 0.157
C(item_nbr)[T.39] 0.1530 0.080 1.914 0.056 -0.004 0.310
C(item_nbr)[T.40] -6.035e-15 0.080 -7.55e-14 1.000 -0.157 0.157
C(item_nbr)[T.41] -1.929e-14 0.080 -2.41e-13 1.000 -0.157 0.157
C(item_nbr)[T.42] -8.413e-15 0.080 -1.05e-13 1.000 -0.157 0.157
C(item_nbr)[T.43] -9.211e-15 0.080 -1.15e-13 1.000 -0.157 0.157
C(item_nbr)[T.44] -5.719e-15 0.080 -7.15e-14 1.000 -0.157 0.157
C(item_nbr)[T.45] -9.552e-15 0.080 -1.19e-13 1.000 -0.157 0.157
C(item_nbr)[T.46] -9.416e-15 0.080 -1.18e-13 1.000 -0.157 0.157
C(item_nbr)[T.47] -1.334e-14 0.080 -1.67e-13 1.000 -0.157 0.157
C(item_nbr)[T.48] -5.598e-15 0.080 -7e-14 1.000 -0.157 0.157
C(item_nbr)[T.49] -8.024e-16 0.080 -1e-14 1.000 -0.157 0.157
C(item_nbr)[T.50] 0.2904 0.080 3.627 0.000 0.133 0.447
C(item_nbr)[T.51] -6.59e-15 0.080 -8.24e-14 1.000 -0.157 0.157
C(item_nbr)[T.52] -5.585e-15 0.080 -6.99e-14 1.000 -0.157 0.157
C(item_nbr)[T.53] -3.969e-15 0.080 -4.97e-14 1.000 -0.157 0.157
C(item_nbr)[T.54] -7.562e-15 0.080 -9.46e-14 1.000 -0.157 0.157
C(item_nbr)[T.55] -9.503e-15 0.080 -1.19e-13 1.000 -0.157 0.157
C(item_nbr)[T.56] -4.577e-15 0.080 -5.73e-14 1.000 -0.157 0.157
C(item_nbr)[T.57] -9.745e-15 0.080 -1.22e-13 1.000 -0.157 0.157
C(item_nbr)[T.58] -7.751e-15 0.080 -9.7e-14 1.000 -0.157 0.157
C(item_nbr)[T.59] -4.214e-15 0.080 -5.27e-14 1.000 -0.157 0.157
C(item_nbr)[T.60] -6.606e-15 0.080 -8.26e-14 1.000 -0.157 0.157
C(item_nbr)[T.61] -9.403e-15 0.080 -1.18e-13 1.000 -0.157 0.157
C(item_nbr)[T.62] -7.428e-15 0.080 -9.29e-14 1.000 -0.157 0.157
C(item_nbr)[T.63] -3.393e-15 0.080 -4.24e-14 1.000 -0.157 0.157
C(item_nbr)[T.64] 0.7631 0.080 9.546 0.000 0.606 0.920
C(item_nbr)[T.65] -5.681e-15 0.080 -7.11e-14 1.000 -0.157 0.157
C(item_nbr)[T.66] -7.241e-15 0.080 -9.06e-14 1.000 -0.157 0.157
C(item_nbr)[T.67] -6.97e-15 0.080 -8.72e-14 1.000 -0.157 0.157
C(item_nbr)[T.68] -6.13e-15 0.080 -7.67e-14 1.000 -0.157 0.157
C(item_nbr)[T.69] -8.121e-15 0.080 -1.02e-13 1.000 -0.157 0.157
C(item_nbr)[T.70] -7.338e-15 0.080 -9.18e-14 1.000 -0.157 0.157
C(item_nbr)[T.71] -6.273e-15 0.080 -7.85e-14 1.000 -0.157 0.157
C(item_nbr)[T.72] -5.449e-15 0.080 -6.82e-14 1.000 -0.157 0.157
C(item_nbr)[T.73] -7.84e-15 0.080 -9.81e-14 1.000 -0.157 0.157
C(item_nbr)[T.74] -9.786e-15 0.080 -1.22e-13 1.000 -0.157 0.157
C(item_nbr)[T.75] -5.72e-15 0.080 -7.15e-14 1.000 -0.157 0.157
C(item_nbr)[T.76] -5.993e-15 0.080 -7.5e-14 1.000 -0.157 0.157
C(item_nbr)[T.77] 0.9694 0.080 12.095 0.000 0.812 1.127
C(item_nbr)[T.78] -5.422e-15 0.080 -6.78e-14 1.000 -0.157 0.157
C(item_nbr)[T.79] -1.015e-14 0.080 -1.27e-13 1.000 -0.157 0.157
C(item_nbr)[T.80] -7.18e-15 0.080 -8.98e-14 1.000 -0.157 0.157
C(item_nbr)[T.81] -2.306e-15 0.080 -2.88e-14 1.000 -0.157 0.157
C(item_nbr)[T.82] -6.422e-15 0.080 -8.03e-14 1.000 -0.157 0.157
C(item_nbr)[T.83] -4.96e-15 0.080 -6.2e-14 1.000 -0.157 0.157
C(item_nbr)[T.84] -8.657e-15 0.080 -1.08e-13 1.000 -0.157 0.157
C(item_nbr)[T.85] 0.0782 0.080 0.978 0.328 -0.079 0.235
C(item_nbr)[T.86] -6.077e-15 0.080 -7.6e-14 1.000 -0.157 0.157
C(item_nbr)[T.87] -8.902e-15 0.080 -1.11e-13 1.000 -0.157 0.157
C(item_nbr)[T.88] -1.1e-14 0.080 -1.38e-13 1.000 -0.157 0.157
C(item_nbr)[T.89] -6.387e-15 0.080 -7.99e-14 1.000 -0.157 0.157
C(item_nbr)[T.90] -4.827e-15 0.080 -6.04e-14 1.000 -0.157 0.157
C(item_nbr)[T.91] -5.38e-15 0.080 -6.73e-14 1.000 -0.157 0.157
C(item_nbr)[T.92] -7.635e-15 0.080 -9.55e-14 1.000 -0.157 0.157
C(item_nbr)[T.93] 0.4585 0.080 5.719 0.000 0.301 0.616
C(item_nbr)[T.94] -8.353e-15 0.080 -1.04e-13 1.000 -0.157 0.157
C(item_nbr)[T.95] -6.518e-15 0.080 -8.15e-14 1.000 -0.157 0.157
C(item_nbr)[T.96] -7.728e-15 0.080 -9.67e-14 1.000 -0.157 0.157
C(item_nbr)[T.97] -8.115e-15 0.080 -1.02e-13 1.000 -0.157 0.157
C(item_nbr)[T.98] -4.888e-15 0.080 -6.11e-14 1.000 -0.157 0.157
C(item_nbr)[T.99] -7.713e-15 0.080 -9.65e-14 1.000 -0.157 0.157
C(item_nbr)[T.100] -6.241e-15 0.080 -7.81e-14 1.000 -0.157 0.157
C(item_nbr)[T.101] -7.228e-15 0.080 -9.04e-14 1.000 -0.157 0.157
C(item_nbr)[T.102] -4.369e-15 0.080 -5.47e-14 1.000 -0.157 0.157
C(item_nbr)[T.103] -8.968e-15 0.080 -1.12e-13 1.000 -0.157 0.157
C(item_nbr)[T.104] -3.838e-15 0.080 -4.8e-14 1.000 -0.157 0.157
C(item_nbr)[T.105] -6.385e-15 0.080 -7.99e-14 1.000 -0.157 0.157
C(item_nbr)[T.106] -5.096e-15 0.080 -6.37e-14 1.000 -0.157 0.157
C(item_nbr)[T.107] -1.159e-14 0.080 -1.45e-13 1.000 -0.157 0.157
C(item_nbr)[T.108] -5.668e-15 0.080 -7.09e-14 1.000 -0.157 0.157
C(item_nbr)[T.109] -2.407e-15 0.080 -3.01e-14 1.000 -0.157 0.157
C(item_nbr)[T.110] -1.253e-14 0.080 -1.57e-13 1.000 -0.157 0.157
C(item_nbr)[T.111] -1.374e-14 0.080 -1.72e-13 1.000 -0.157 0.157
scale(heat) -0.0740 0.046 -1.597 0.110 -0.165 0.017
scale(cool) 0.0207 0.009 2.350 0.019 0.003 0.038
scale(preciptotal) -0.0009 0.006 -0.139 0.889 -0.013 0.012
scale(resultspeed) 0.0285 0.017 1.653 0.098 -0.005 0.062
scale(avgspeed) -0.0348 0.018 -1.920 0.055 -0.070 0.001
scale(relative_humility) 0.0125 0.008 1.531 0.126 -0.004 0.028
scale(windchill) -0.0880 0.050 -1.758 0.079 -0.186 0.010
==============================================================================
Omnibus: 160342.749 Durbin-Watson: 2.002
Prob(Omnibus): 0.000 Jarque-Bera (JB): 26240830020.689
Skew: -10.050 Prob(JB): 0.00
Kurtosis: 2591.757 Cond. No. 188.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
4. Variable transformation : df2 (log1p_units)
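The log1p target is used because `units` is heavily right-skewed (the residual skew of 9.3 in the first fit points the same way), and the log1p transform compresses that long tail. A quick check of both targets - a sketch, assuming `df1` and `df2` from the cells above are still in scope.
###Code
# compare skewness of the raw and log-transformed targets (sketch)
print("units       skew:", round(df1['units'].skew(), 2))
print("log1p_units skew:", round(df2['log1p_units'].skew(), 2))
###Output
_____no_output_____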
###Code
model2 = sm.OLS.from_formula('log1p_units ~ scale(tmax) + scale(tmin) + scale(tavg) + scale(dewpoint) + scale(wetbulb) + scale(heat) + scale(cool)\
+ scale(preciptotal) + scale(stnpressure) + scale(sealevel) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) + scale(sunset) + scale(sunrise) + scale(daytime) + C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df2)
result2 = model2.fit()
print(result2.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.949
Model: OLS Adj. R-squared: 0.949
Method: Least Squares F-statistic: 1.009e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:49:16 Log-Likelihood: 57334.
No. Observations: 94572 AIC: -1.143e+05
Df Residuals: 94396 BIC: -1.127e+05
Df Model: 175
Covariance Type: nonrobust
============================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------------
C(resultdir)[1.0] -0.0095 0.008 -1.186 0.235 -0.025 0.006
C(resultdir)[2.0] -0.0077 0.008 -0.950 0.342 -0.024 0.008
C(resultdir)[3.0] -0.0086 0.007 -1.188 0.235 -0.023 0.006
C(resultdir)[4.0] -0.0077 0.007 -1.045 0.296 -0.022 0.007
C(resultdir)[5.0] -0.0140 0.007 -1.932 0.053 -0.028 0.000
C(resultdir)[6.0] -0.0095 0.008 -1.256 0.209 -0.024 0.005
C(resultdir)[7.0] -0.0172 0.007 -2.356 0.018 -0.031 -0.003
C(resultdir)[8.0] -0.0108 0.008 -1.343 0.179 -0.026 0.005
C(resultdir)[9.0] -0.0036 0.008 -0.455 0.649 -0.019 0.012
C(resultdir)[10.0] -0.0241 0.009 -2.641 0.008 -0.042 -0.006
C(resultdir)[11.0] -0.0109 0.008 -1.288 0.198 -0.027 0.006
C(resultdir)[12.0] -0.0128 0.008 -1.546 0.122 -0.029 0.003
C(resultdir)[13.0] -0.0132 0.008 -1.693 0.091 -0.029 0.002
C(resultdir)[14.0] -0.0289 0.009 -3.353 0.001 -0.046 -0.012
C(resultdir)[15.0] -0.0104 0.008 -1.275 0.202 -0.026 0.006
C(resultdir)[16.0] -0.0112 0.009 -1.230 0.219 -0.029 0.007
C(resultdir)[17.0] -0.0055 0.009 -0.593 0.553 -0.024 0.013
C(resultdir)[18.0] -0.0101 0.008 -1.237 0.216 -0.026 0.006
C(resultdir)[19.0] -0.0128 0.008 -1.655 0.098 -0.028 0.002
C(resultdir)[20.0] -0.0046 0.008 -0.605 0.545 -0.019 0.010
C(resultdir)[21.0] -0.0104 0.007 -1.459 0.145 -0.024 0.004
C(resultdir)[22.0] -0.0091 0.007 -1.323 0.186 -0.023 0.004
C(resultdir)[23.0] -0.0105 0.007 -1.508 0.132 -0.024 0.003
C(resultdir)[24.0] -0.0116 0.007 -1.667 0.095 -0.025 0.002
C(resultdir)[25.0] -0.0111 0.007 -1.615 0.106 -0.025 0.002
C(resultdir)[26.0] -0.0096 0.007 -1.377 0.169 -0.023 0.004
C(resultdir)[27.0] -0.0131 0.007 -1.886 0.059 -0.027 0.001
C(resultdir)[28.0] -0.0112 0.007 -1.639 0.101 -0.025 0.002
C(resultdir)[29.0] -0.0128 0.007 -1.867 0.062 -0.026 0.001
C(resultdir)[30.0] -0.0108 0.007 -1.533 0.125 -0.025 0.003
C(resultdir)[31.0] -0.0125 0.007 -1.788 0.074 -0.026 0.001
C(resultdir)[32.0] -0.0114 0.007 -1.642 0.101 -0.025 0.002
C(resultdir)[33.0] -0.0140 0.007 -1.911 0.056 -0.028 0.000
C(resultdir)[34.0] -0.0079 0.008 -1.048 0.295 -0.023 0.007
C(resultdir)[35.0] -0.0058 0.008 -0.766 0.444 -0.021 0.009
C(resultdir)[36.0] -0.0051 0.008 -0.616 0.538 -0.021 0.011
C(year)[T.2013] -0.0050 0.001 -4.785 0.000 -0.007 -0.003
C(year)[T.2014] 0.0026 0.001 2.063 0.039 0.000 0.005
C(month)[T.2] 0.0005 0.003 0.172 0.864 -0.005 0.006
C(month)[T.3] 0.0086 0.004 2.174 0.030 0.001 0.016
C(month)[T.4] 0.0107 0.006 1.703 0.089 -0.002 0.023
C(month)[T.5] 0.0148 0.008 1.840 0.066 -0.001 0.030
C(month)[T.6] 0.0162 0.009 1.886 0.059 -0.001 0.033
C(month)[T.7] 0.0132 0.008 1.635 0.102 -0.003 0.029
C(month)[T.8] 0.0122 0.007 1.801 0.072 -0.001 0.025
C(month)[T.9] 0.0058 0.006 0.982 0.326 -0.006 0.017
C(month)[T.10] 0.0051 0.006 0.850 0.395 -0.007 0.017
C(month)[T.11] 0.0081 0.006 1.425 0.154 -0.003 0.019
C(month)[T.12] 0.0091 0.004 2.376 0.018 0.002 0.017
C(weekend)[T.1] 0.0081 0.001 8.333 0.000 0.006 0.010
C(rainY)[T.1] 0.0025 0.001 2.075 0.038 0.000 0.005
C(item_nbr)[T.2] 1.057e-15 0.006 1.65e-13 1.000 -0.013 0.013
C(item_nbr)[T.3] 4.362e-16 0.006 6.82e-14 1.000 -0.013 0.013
C(item_nbr)[T.4] 6.198e-16 0.006 9.69e-14 1.000 -0.013 0.013
C(item_nbr)[T.5] -1.983e-16 0.006 -3.1e-14 1.000 -0.013 0.013
C(item_nbr)[T.6] 7.423e-16 0.006 1.16e-13 1.000 -0.013 0.013
C(item_nbr)[T.7] -6.026e-16 0.006 -9.42e-14 1.000 -0.013 0.013
C(item_nbr)[T.8] 6.812e-16 0.006 1.06e-13 1.000 -0.013 0.013
C(item_nbr)[T.9] -8.679e-16 0.006 -1.36e-13 1.000 -0.013 0.013
C(item_nbr)[T.10] 1.815e-16 0.006 2.84e-14 1.000 -0.013 0.013
C(item_nbr)[T.11] 4.449e-16 0.006 6.95e-14 1.000 -0.013 0.013
C(item_nbr)[T.12] 5.021e-16 0.006 7.85e-14 1.000 -0.013 0.013
C(item_nbr)[T.13] 3.607e-16 0.006 5.64e-14 1.000 -0.013 0.013
C(item_nbr)[T.14] -1.061e-15 0.006 -1.66e-13 1.000 -0.013 0.013
C(item_nbr)[T.15] -9.762e-16 0.006 -1.53e-13 1.000 -0.013 0.013
C(item_nbr)[T.16] 3.4076 0.006 532.447 0.000 3.395 3.420
C(item_nbr)[T.17] -1.415e-15 0.006 -2.21e-13 1.000 -0.013 0.013
C(item_nbr)[T.18] 3.839e-16 0.006 6e-14 1.000 -0.013 0.013
C(item_nbr)[T.19] 4.348e-16 0.006 6.79e-14 1.000 -0.013 0.013
C(item_nbr)[T.20] -1.091e-15 0.006 -1.7e-13 1.000 -0.013 0.013
C(item_nbr)[T.21] 8.929e-16 0.006 1.4e-13 1.000 -0.013 0.013
C(item_nbr)[T.22] -8.413e-16 0.006 -1.31e-13 1.000 -0.013 0.013
C(item_nbr)[T.23] 1.432e-15 0.006 2.24e-13 1.000 -0.013 0.013
C(item_nbr)[T.24] -6.151e-16 0.006 -9.61e-14 1.000 -0.013 0.013
C(item_nbr)[T.25] 5.0039 0.006 781.874 0.000 4.991 5.016
C(item_nbr)[T.26] 4.683e-16 0.006 7.32e-14 1.000 -0.013 0.013
C(item_nbr)[T.27] 2.294e-16 0.006 3.58e-14 1.000 -0.013 0.013
C(item_nbr)[T.28] 2.719e-15 0.006 4.25e-13 1.000 -0.013 0.013
C(item_nbr)[T.29] -3.958e-16 0.006 -6.18e-14 1.000 -0.013 0.013
C(item_nbr)[T.30] 2.562e-15 0.006 4e-13 1.000 -0.013 0.013
C(item_nbr)[T.31] -8.097e-16 0.006 -1.27e-13 1.000 -0.013 0.013
C(item_nbr)[T.32] -4.631e-16 0.006 -7.24e-14 1.000 -0.013 0.013
C(item_nbr)[T.33] -1.485e-15 0.006 -2.32e-13 1.000 -0.013 0.013
C(item_nbr)[T.34] 2.384e-16 0.006 3.73e-14 1.000 -0.013 0.013
C(item_nbr)[T.35] -9.765e-16 0.006 -1.53e-13 1.000 -0.013 0.013
C(item_nbr)[T.36] -1.48e-15 0.006 -2.31e-13 1.000 -0.013 0.013
C(item_nbr)[T.37] 7.837e-17 0.006 1.22e-14 1.000 -0.013 0.013
C(item_nbr)[T.38] -6.161e-16 0.006 -9.63e-14 1.000 -0.013 0.013
C(item_nbr)[T.39] 0.0793 0.006 12.385 0.000 0.067 0.092
C(item_nbr)[T.40] 4.208e-16 0.006 6.58e-14 1.000 -0.013 0.013
C(item_nbr)[T.41] -8.65e-16 0.006 -1.35e-13 1.000 -0.013 0.013
C(item_nbr)[T.42] -1.652e-15 0.006 -2.58e-13 1.000 -0.013 0.013
C(item_nbr)[T.43] -2.651e-15 0.006 -4.14e-13 1.000 -0.013 0.013
C(item_nbr)[T.44] -2.219e-15 0.006 -3.47e-13 1.000 -0.013 0.013
C(item_nbr)[T.45] 1.05e-15 0.006 1.64e-13 1.000 -0.013 0.013
C(item_nbr)[T.46] -3.529e-16 0.006 -5.51e-14 1.000 -0.013 0.013
C(item_nbr)[T.47] -1.379e-15 0.006 -2.16e-13 1.000 -0.013 0.013
C(item_nbr)[T.48] -8.269e-16 0.006 -1.29e-13 1.000 -0.013 0.013
C(item_nbr)[T.49] -8.693e-16 0.006 -1.36e-13 1.000 -0.013 0.013
C(item_nbr)[T.50] 0.1360 0.006 21.254 0.000 0.123 0.149
C(item_nbr)[T.51] -1.424e-15 0.006 -2.22e-13 1.000 -0.013 0.013
C(item_nbr)[T.52] -6.055e-15 0.006 -9.46e-13 1.000 -0.013 0.013
C(item_nbr)[T.53] 4.247e-16 0.006 6.64e-14 1.000 -0.013 0.013
C(item_nbr)[T.54] -8.56e-16 0.006 -1.34e-13 1.000 -0.013 0.013
C(item_nbr)[T.55] 1.638e-15 0.006 2.56e-13 1.000 -0.013 0.013
C(item_nbr)[T.56] -7.542e-16 0.006 -1.18e-13 1.000 -0.013 0.013
C(item_nbr)[T.57] -5.295e-16 0.006 -8.27e-14 1.000 -0.013 0.013
C(item_nbr)[T.58] -4.384e-16 0.006 -6.85e-14 1.000 -0.013 0.013
C(item_nbr)[T.59] -1.574e-16 0.006 -2.46e-14 1.000 -0.013 0.013
C(item_nbr)[T.60] 1.054e-18 0.006 1.65e-16 1.000 -0.013 0.013
C(item_nbr)[T.61] 8.294e-16 0.006 1.3e-13 1.000 -0.013 0.013
C(item_nbr)[T.62] -1.302e-16 0.006 -2.03e-14 1.000 -0.013 0.013
C(item_nbr)[T.63] -4.827e-16 0.006 -7.54e-14 1.000 -0.013 0.013
C(item_nbr)[T.64] 0.3245 0.006 50.702 0.000 0.312 0.337
C(item_nbr)[T.65] 3.368e-16 0.006 5.26e-14 1.000 -0.013 0.013
C(item_nbr)[T.66] -1.146e-15 0.006 -1.79e-13 1.000 -0.013 0.013
C(item_nbr)[T.67] -3.691e-15 0.006 -5.77e-13 1.000 -0.013 0.013
C(item_nbr)[T.68] 8.916e-16 0.006 1.39e-13 1.000 -0.013 0.013
C(item_nbr)[T.69] 1.421e-15 0.006 2.22e-13 1.000 -0.013 0.013
C(item_nbr)[T.70] 1.272e-15 0.006 1.99e-13 1.000 -0.013 0.013
C(item_nbr)[T.71] -1.112e-15 0.006 -1.74e-13 1.000 -0.013 0.013
C(item_nbr)[T.72] 4.533e-16 0.006 7.08e-14 1.000 -0.013 0.013
C(item_nbr)[T.73] 1.108e-15 0.006 1.73e-13 1.000 -0.013 0.013
C(item_nbr)[T.74] -1.077e-15 0.006 -1.68e-13 1.000 -0.013 0.013
C(item_nbr)[T.75] -1.992e-15 0.006 -3.11e-13 1.000 -0.013 0.013
C(item_nbr)[T.76] -5.166e-16 0.006 -8.07e-14 1.000 -0.013 0.013
C(item_nbr)[T.77] 0.4063 0.006 63.484 0.000 0.394 0.419
C(item_nbr)[T.78] -5.004e-16 0.006 -7.82e-14 1.000 -0.013 0.013
C(item_nbr)[T.79] -6.578e-16 0.006 -1.03e-13 1.000 -0.013 0.013
C(item_nbr)[T.80] -1.55e-15 0.006 -2.42e-13 1.000 -0.013 0.013
C(item_nbr)[T.81] -1.36e-15 0.006 -2.13e-13 1.000 -0.013 0.013
C(item_nbr)[T.82] -2.253e-17 0.006 -3.52e-15 1.000 -0.013 0.013
C(item_nbr)[T.83] 2.168e-16 0.006 3.39e-14 1.000 -0.013 0.013
C(item_nbr)[T.84] -1.081e-15 0.006 -1.69e-13 1.000 -0.013 0.013
C(item_nbr)[T.85] 0.0465 0.006 7.268 0.000 0.034 0.059
C(item_nbr)[T.86] 5.511e-16 0.006 8.61e-14 1.000 -0.013 0.013
C(item_nbr)[T.87] -8.665e-16 0.006 -1.35e-13 1.000 -0.013 0.013
C(item_nbr)[T.88] -2.034e-15 0.006 -3.18e-13 1.000 -0.013 0.013
C(item_nbr)[T.89] -8.115e-16 0.006 -1.27e-13 1.000 -0.013 0.013
C(item_nbr)[T.90] -2.393e-16 0.006 -3.74e-14 1.000 -0.013 0.013
C(item_nbr)[T.91] 2.271e-15 0.006 3.55e-13 1.000 -0.013 0.013
C(item_nbr)[T.92] -3.599e-15 0.006 -5.62e-13 1.000 -0.013 0.013
C(item_nbr)[T.93] 0.2054 0.006 32.101 0.000 0.193 0.218
C(item_nbr)[T.94] 7.049e-16 0.006 1.1e-13 1.000 -0.013 0.013
C(item_nbr)[T.95] -2.107e-16 0.006 -3.29e-14 1.000 -0.013 0.013
C(item_nbr)[T.96] -4.652e-16 0.006 -7.27e-14 1.000 -0.013 0.013
C(item_nbr)[T.97] -2.393e-16 0.006 -3.74e-14 1.000 -0.013 0.013
C(item_nbr)[T.98] -6.802e-16 0.006 -1.06e-13 1.000 -0.013 0.013
C(item_nbr)[T.99] -2.214e-16 0.006 -3.46e-14 1.000 -0.013 0.013
C(item_nbr)[T.100] -9.873e-16 0.006 -1.54e-13 1.000 -0.013 0.013
C(item_nbr)[T.101] -6.577e-16 0.006 -1.03e-13 1.000 -0.013 0.013
C(item_nbr)[T.102] -9.683e-16 0.006 -1.51e-13 1.000 -0.013 0.013
C(item_nbr)[T.103] 9.49e-16 0.006 1.48e-13 1.000 -0.013 0.013
C(item_nbr)[T.104] -1.314e-15 0.006 -2.05e-13 1.000 -0.013 0.013
C(item_nbr)[T.105] -8.432e-16 0.006 -1.32e-13 1.000 -0.013 0.013
C(item_nbr)[T.106] -2.479e-16 0.006 -3.87e-14 1.000 -0.013 0.013
C(item_nbr)[T.107] -1.175e-15 0.006 -1.84e-13 1.000 -0.013 0.013
C(item_nbr)[T.108] -2.105e-16 0.006 -3.29e-14 1.000 -0.013 0.013
C(item_nbr)[T.109] -1.221e-15 0.006 -1.91e-13 1.000 -0.013 0.013
C(item_nbr)[T.110] -3.145e-16 0.006 -4.91e-14 1.000 -0.013 0.013
C(item_nbr)[T.111] -1.496e-15 0.006 -2.34e-13 1.000 -0.013 0.013
scale(tmax) 0.0113 0.007 1.532 0.125 -0.003 0.026
scale(tmin) 0.0133 0.007 2.019 0.044 0.000 0.026
scale(tavg) 0.0124 0.007 1.821 0.069 -0.001 0.026
scale(dewpoint) -0.0028 0.009 -0.313 0.754 -0.020 0.015
scale(wetbulb) -0.0034 0.006 -0.539 0.590 -0.016 0.009
scale(heat) 0.0010 0.004 0.241 0.810 -0.007 0.009
scale(cool) 1.523e-05 0.001 0.018 0.986 -0.002 0.002
scale(preciptotal) 0.0009 0.001 1.663 0.096 -0.000 0.002
scale(stnpressure) 0.0009 0.004 0.213 0.831 -0.008 0.009
scale(sealevel) -0.0017 0.004 -0.396 0.692 -0.010 0.007
scale(resultspeed) 0.0036 0.002 2.242 0.025 0.000 0.007
scale(avgspeed) -0.0066 0.002 -3.212 0.001 -0.011 -0.003
scale(sunset) 0.0042 0.006 0.654 0.513 -0.008 0.017
scale(sunrise) 0.0063 0.007 0.942 0.346 -0.007 0.019
scale(daytime) -0.0010 0.001 -0.958 0.338 -0.003 0.001
scale(relative_humility) 0.0013 0.003 0.401 0.689 -0.005 0.008
scale(windchill) -0.0340 0.017 -1.952 0.051 -0.068 0.000
==============================================================================
Omnibus: 98483.645 Durbin-Watson: 2.006
Prob(Omnibus): 0.000 Jarque-Bera (JB): 165555242.169
Skew: 4.125 Prob(JB): 0.00
Kurtosis: 207.806 Cond. No. 5.43e+15
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 3e-26. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
Taking the log of units raised the R-squared, but the condition number is still unchanged; the highly correlated variables still need to be removed. 5. Variable transformation: df2 (log1p_units) + outlier removal
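Before removing outliers, it helps to be able to reproduce the reported condition number directly from the design matrix, so it can be tracked as variables are added or dropped. The cell below is a minimal sketch, assuming `numpy`, the previous fit `result2`, and `df2` are still in scope; the list of temperature columns is only an illustration.
###Code
import numpy as np

# Condition number of the current design matrix -- the same quantity the OLS
# summary reports as "Cond. No." (ratio of largest to smallest singular value).
print(np.linalg.cond(result2.model.exog))

# Pairwise correlations among the temperature-related columns, the usual
# multicollinearity suspects in this data.
print(df2[['tmax', 'tmin', 'tavg', 'dewpoint', 'wetbulb', 'windchill']].corr())
###Output
_____no_output_____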
###Code
# Outlier removal
# Drop observations whose Cook's distance exceeds the Fox criterion, 4 / (n - 2)
influence = result2.get_influence()
cooks_d2, pvals = influence.cooks_distance
fox_cr = 4 / (len(df2) - 2)
idx_outlier = np.where(cooks_d2 > fox_cr)[0]
len(idx_outlier)
idx = list(set(range(len(df2))).difference(idx_outlier))
df2_1 = df2.iloc[idx, :].reset_index(drop=True)
df2_1.tail()
# OLS - df2_1
model2_1 = sm.OLS.from_formula('log1p_units ~ scale(tmax) + scale(tmin) + scale(tavg) + scale(dewpoint) + scale(wetbulb) + scale(heat) + scale(cool)\
+ scale(preciptotal) + scale(stnpressure) + scale(sealevel) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) + C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df2_1)
result2_1 = model2_1.fit()
print(result2_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.987
Model: OLS Adj. R-squared: 0.987
Method: Least Squares F-statistic: 3.997e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:49:51 Log-Likelihood: 1.4191e+05
No. Observations: 91800 AIC: -2.835e+05
Df Residuals: 91626 BIC: -2.818e+05
Df Model: 173
Covariance Type: nonrobust
============================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------------
C(resultdir)[1.0] 0.0009 0.003 0.347 0.728 -0.004 0.006
C(resultdir)[2.0] 0.0002 0.003 0.076 0.939 -0.005 0.006
C(resultdir)[3.0] 0.0015 0.002 0.650 0.516 -0.003 0.006
C(resultdir)[4.0] 0.0006 0.002 0.248 0.804 -0.004 0.005
C(resultdir)[5.0] -0.0002 0.002 -0.081 0.935 -0.005 0.004
C(resultdir)[6.0] 0.0016 0.002 0.665 0.506 -0.003 0.006
C(resultdir)[7.0] 0.0025 0.002 1.075 0.282 -0.002 0.007
C(resultdir)[8.0] 0.0021 0.003 0.783 0.434 -0.003 0.007
C(resultdir)[9.0] 0.0011 0.003 0.422 0.673 -0.004 0.006
C(resultdir)[10.0] 0.0024 0.003 0.733 0.463 -0.004 0.009
C(resultdir)[11.0] 0.0003 0.003 0.122 0.903 -0.005 0.006
C(resultdir)[12.0] 0.0022 0.003 0.800 0.424 -0.003 0.008
C(resultdir)[13.0] 0.0016 0.003 0.620 0.535 -0.003 0.007
C(resultdir)[14.0] 0.0011 0.003 0.388 0.698 -0.005 0.007
C(resultdir)[15.0] 0.0023 0.003 0.817 0.414 -0.003 0.008
C(resultdir)[16.0] 0.0023 0.003 0.757 0.449 -0.004 0.008
C(resultdir)[17.0] 0.0036 0.003 1.108 0.268 -0.003 0.010
C(resultdir)[18.0] 0.0023 0.003 0.822 0.411 -0.003 0.008
C(resultdir)[19.0] 9.359e-05 0.003 0.037 0.971 -0.005 0.005
C(resultdir)[20.0] 0.0010 0.002 0.411 0.681 -0.004 0.006
C(resultdir)[21.0] 0.0019 0.002 0.876 0.381 -0.002 0.006
C(resultdir)[22.0] 0.0017 0.002 0.809 0.418 -0.002 0.006
C(resultdir)[23.0] 0.0025 0.002 1.197 0.231 -0.002 0.007
C(resultdir)[24.0] 0.0009 0.002 0.416 0.677 -0.003 0.005
C(resultdir)[25.0] 0.0018 0.002 0.866 0.386 -0.002 0.006
C(resultdir)[26.0] 0.0020 0.002 0.963 0.335 -0.002 0.006
C(resultdir)[27.0] 0.0023 0.002 1.111 0.267 -0.002 0.006
C(resultdir)[28.0] 0.0029 0.002 1.380 0.168 -0.001 0.007
C(resultdir)[29.0] 0.0025 0.002 1.211 0.226 -0.002 0.007
C(resultdir)[30.0] 0.0021 0.002 0.982 0.326 -0.002 0.006
C(resultdir)[31.0] 0.0002 0.002 0.116 0.908 -0.004 0.004
C(resultdir)[32.0] 0.0024 0.002 1.099 0.272 -0.002 0.007
C(resultdir)[33.0] 2.364e-05 0.002 0.010 0.992 -0.004 0.005
C(resultdir)[34.0] 0.0020 0.002 0.807 0.419 -0.003 0.007
C(resultdir)[35.0] 0.0025 0.002 1.035 0.301 -0.002 0.007
C(resultdir)[36.0] 0.0004 0.003 0.148 0.882 -0.005 0.006
C(year)[T.2013] 0.0007 0.000 1.572 0.116 -0.000 0.001
C(year)[T.2014] -0.0010 0.000 -2.050 0.040 -0.002 -4.45e-05
C(month)[T.2] -0.0008 0.001 -0.947 0.343 -0.003 0.001
C(month)[T.3] -0.0026 0.001 -2.902 0.004 -0.004 -0.001
C(month)[T.4] -0.0028 0.001 -2.683 0.007 -0.005 -0.001
C(month)[T.5] -0.0036 0.001 -3.018 0.003 -0.006 -0.001
C(month)[T.6] -0.0039 0.001 -2.874 0.004 -0.006 -0.001
C(month)[T.7] -0.0050 0.001 -3.363 0.001 -0.008 -0.002
C(month)[T.8] -0.0031 0.001 -2.158 0.031 -0.006 -0.000
C(month)[T.9] -0.0035 0.001 -2.725 0.006 -0.006 -0.001
C(month)[T.10] -0.0034 0.001 -3.120 0.002 -0.006 -0.001
C(month)[T.11] -0.0010 0.001 -0.968 0.333 -0.003 0.001
C(month)[T.12] -0.0005 0.001 -0.536 0.592 -0.002 0.001
C(weekend)[T.1] 0.0021 0.000 5.499 0.000 0.001 0.003
C(rainY)[T.1] 0.0005 0.000 1.147 0.252 -0.000 0.001
C(item_nbr)[T.2] 3.161e-17 0.003 1.26e-14 1.000 -0.005 0.005
C(item_nbr)[T.3] -1.229e-15 0.003 -4.91e-13 1.000 -0.005 0.005
C(item_nbr)[T.4] 6.056e-17 0.003 2.42e-14 1.000 -0.005 0.005
C(item_nbr)[T.5] -2.593e-15 0.003 -1.04e-12 1.000 -0.005 0.005
C(item_nbr)[T.6] 1.474e-15 0.003 5.89e-13 1.000 -0.005 0.005
C(item_nbr)[T.7] 8.827e-16 0.003 3.53e-13 1.000 -0.005 0.005
C(item_nbr)[T.8] 6.191e-16 0.003 2.48e-13 1.000 -0.005 0.005
C(item_nbr)[T.9] 4.36e-16 0.003 1.74e-13 1.000 -0.005 0.005
C(item_nbr)[T.10] 5.197e-15 0.003 2.08e-12 1.000 -0.005 0.005
C(item_nbr)[T.11] -7.089e-15 0.003 -2.83e-12 1.000 -0.005 0.005
C(item_nbr)[T.12] 1.102e-15 0.003 4.41e-13 1.000 -0.005 0.005
C(item_nbr)[T.13] -1.64e-15 0.003 -6.56e-13 1.000 -0.005 0.005
C(item_nbr)[T.14] 8.244e-16 0.003 3.3e-13 1.000 -0.005 0.005
C(item_nbr)[T.15] -1.502e-15 0.003 -6e-13 1.000 -0.005 0.005
C(item_nbr)[T.16] 3.2860 0.003 1022.285 0.000 3.280 3.292
C(item_nbr)[T.17] -1.16e-15 0.003 -4.64e-13 1.000 -0.005 0.005
C(item_nbr)[T.18] 8.262e-17 0.003 3.3e-14 1.000 -0.005 0.005
C(item_nbr)[T.19] 2.548e-16 0.003 1.02e-13 1.000 -0.005 0.005
C(item_nbr)[T.20] 1.187e-16 0.003 4.75e-14 1.000 -0.005 0.005
C(item_nbr)[T.21] 3.921e-16 0.003 1.57e-13 1.000 -0.005 0.005
C(item_nbr)[T.22] 7.462e-17 0.003 2.98e-14 1.000 -0.005 0.005
C(item_nbr)[T.23] 1.601e-16 0.003 6.4e-14 1.000 -0.005 0.005
C(item_nbr)[T.24] 3.16e-16 0.003 1.26e-13 1.000 -0.005 0.005
C(item_nbr)[T.25] 4.9851 0.003 1801.151 0.000 4.980 4.991
C(item_nbr)[T.26] 2.056e-16 0.003 8.22e-14 1.000 -0.005 0.005
C(item_nbr)[T.27] 1.846e-16 0.003 7.38e-14 1.000 -0.005 0.005
C(item_nbr)[T.28] 2.614e-16 0.003 1.05e-13 1.000 -0.005 0.005
C(item_nbr)[T.29] 3.385e-16 0.003 1.35e-13 1.000 -0.005 0.005
C(item_nbr)[T.30] 1.327e-16 0.003 5.31e-14 1.000 -0.005 0.005
C(item_nbr)[T.31] 1.556e-16 0.003 6.22e-14 1.000 -0.005 0.005
C(item_nbr)[T.32] -2.462e-16 0.003 -9.85e-14 1.000 -0.005 0.005
C(item_nbr)[T.33] 4.694e-16 0.003 1.88e-13 1.000 -0.005 0.005
C(item_nbr)[T.34] 1.071e-16 0.003 4.28e-14 1.000 -0.005 0.005
C(item_nbr)[T.35] 7.798e-17 0.003 3.12e-14 1.000 -0.005 0.005
C(item_nbr)[T.36] -9.853e-17 0.003 -3.94e-14 1.000 -0.005 0.005
C(item_nbr)[T.37] 1.653e-16 0.003 6.61e-14 1.000 -0.005 0.005
C(item_nbr)[T.38] 4.977e-16 0.003 1.99e-13 1.000 -0.005 0.005
C(item_nbr)[T.39] 0.0268 0.003 10.506 0.000 0.022 0.032
C(item_nbr)[T.40] 1.221e-16 0.003 4.88e-14 1.000 -0.005 0.005
C(item_nbr)[T.41] 2.288e-16 0.003 9.15e-14 1.000 -0.005 0.005
C(item_nbr)[T.42] 4.838e-16 0.003 1.93e-13 1.000 -0.005 0.005
C(item_nbr)[T.43] -2.432e-16 0.003 -9.72e-14 1.000 -0.005 0.005
C(item_nbr)[T.44] 4.076e-16 0.003 1.63e-13 1.000 -0.005 0.005
C(item_nbr)[T.45] 6.051e-17 0.003 2.42e-14 1.000 -0.005 0.005
C(item_nbr)[T.46] -3.732e-17 0.003 -1.49e-14 1.000 -0.005 0.005
C(item_nbr)[T.47] 4.895e-16 0.003 1.96e-13 1.000 -0.005 0.005
C(item_nbr)[T.48] 1.008e-16 0.003 4.03e-14 1.000 -0.005 0.005
C(item_nbr)[T.49] 3.197e-16 0.003 1.28e-13 1.000 -0.005 0.005
C(item_nbr)[T.50] 0.0548 0.003 21.242 0.000 0.050 0.060
C(item_nbr)[T.51] -4.768e-17 0.003 -1.91e-14 1.000 -0.005 0.005
C(item_nbr)[T.52] 1.032e-16 0.003 4.13e-14 1.000 -0.005 0.005
C(item_nbr)[T.53] 3.162e-16 0.003 1.26e-13 1.000 -0.005 0.005
C(item_nbr)[T.54] 1.941e-16 0.003 7.76e-14 1.000 -0.005 0.005
C(item_nbr)[T.55] 1.019e-18 0.003 4.07e-16 1.000 -0.005 0.005
C(item_nbr)[T.56] 1.597e-16 0.003 6.39e-14 1.000 -0.005 0.005
C(item_nbr)[T.57] 3.29e-16 0.003 1.32e-13 1.000 -0.005 0.005
C(item_nbr)[T.58] 4.815e-16 0.003 1.93e-13 1.000 -0.005 0.005
C(item_nbr)[T.59] 2.25e-16 0.003 9e-14 1.000 -0.005 0.005
C(item_nbr)[T.60] 4.795e-16 0.003 1.92e-13 1.000 -0.005 0.005
C(item_nbr)[T.61] 1.886e-16 0.003 7.54e-14 1.000 -0.005 0.005
C(item_nbr)[T.62] 2.259e-16 0.003 9.03e-14 1.000 -0.005 0.005
C(item_nbr)[T.63] 2.308e-16 0.003 9.23e-14 1.000 -0.005 0.005
C(item_nbr)[T.64] 1.2733 0.023 54.967 0.000 1.228 1.319
C(item_nbr)[T.65] 3.369e-16 0.003 1.35e-13 1.000 -0.005 0.005
C(item_nbr)[T.66] 8.564e-17 0.003 3.42e-14 1.000 -0.005 0.005
C(item_nbr)[T.67] 8.34e-17 0.003 3.33e-14 1.000 -0.005 0.005
C(item_nbr)[T.68] 3.242e-16 0.003 1.3e-13 1.000 -0.005 0.005
C(item_nbr)[T.69] 3.302e-16 0.003 1.32e-13 1.000 -0.005 0.005
C(item_nbr)[T.70] 3.237e-17 0.003 1.29e-14 1.000 -0.005 0.005
C(item_nbr)[T.71] 3.475e-16 0.003 1.39e-13 1.000 -0.005 0.005
C(item_nbr)[T.72] 3.95e-16 0.003 1.58e-13 1.000 -0.005 0.005
C(item_nbr)[T.73] 3.209e-16 0.003 1.28e-13 1.000 -0.005 0.005
C(item_nbr)[T.74] 1.55e-16 0.003 6.2e-14 1.000 -0.005 0.005
C(item_nbr)[T.75] 4.026e-16 0.003 1.61e-13 1.000 -0.005 0.005
C(item_nbr)[T.76] 2.544e-16 0.003 1.02e-13 1.000 -0.005 0.005
C(item_nbr)[T.77] 0.3059 0.013 22.744 0.000 0.280 0.332
C(item_nbr)[T.78] 1.506e-16 0.003 6.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.79] 2.793e-16 0.003 1.12e-13 1.000 -0.005 0.005
C(item_nbr)[T.80] 2.402e-16 0.003 9.6e-14 1.000 -0.005 0.005
C(item_nbr)[T.81] 3.806e-16 0.003 1.52e-13 1.000 -0.005 0.005
C(item_nbr)[T.82] 1.73e-16 0.003 6.92e-14 1.000 -0.005 0.005
C(item_nbr)[T.83] 1.028e-16 0.003 4.11e-14 1.000 -0.005 0.005
C(item_nbr)[T.84] 2.007e-17 0.003 8.02e-15 1.000 -0.005 0.005
C(item_nbr)[T.85] 0.0035 0.003 1.368 0.171 -0.002 0.008
C(item_nbr)[T.86] 1.179e-16 0.003 4.71e-14 1.000 -0.005 0.005
C(item_nbr)[T.87] 1.461e-16 0.003 5.84e-14 1.000 -0.005 0.005
C(item_nbr)[T.88] 2.082e-16 0.003 8.32e-14 1.000 -0.005 0.005
C(item_nbr)[T.89] 2.69e-16 0.003 1.08e-13 1.000 -0.005 0.005
C(item_nbr)[T.90] 1.959e-16 0.003 7.83e-14 1.000 -0.005 0.005
C(item_nbr)[T.91] 1.029e-16 0.003 4.12e-14 1.000 -0.005 0.005
C(item_nbr)[T.92] 2.25e-16 0.003 9e-14 1.000 -0.005 0.005
C(item_nbr)[T.93] 0.0087 0.003 3.322 0.001 0.004 0.014
C(item_nbr)[T.94] 2.562e-16 0.003 1.02e-13 1.000 -0.005 0.005
C(item_nbr)[T.95] 2.131e-16 0.003 8.52e-14 1.000 -0.005 0.005
C(item_nbr)[T.96] 4.634e-16 0.003 1.85e-13 1.000 -0.005 0.005
C(item_nbr)[T.97] 4.066e-16 0.003 1.63e-13 1.000 -0.005 0.005
C(item_nbr)[T.98] 3.43e-16 0.003 1.37e-13 1.000 -0.005 0.005
C(item_nbr)[T.99] 2.437e-16 0.003 9.75e-14 1.000 -0.005 0.005
C(item_nbr)[T.100] 3.688e-16 0.003 1.47e-13 1.000 -0.005 0.005
C(item_nbr)[T.101] 6.957e-17 0.003 2.78e-14 1.000 -0.005 0.005
C(item_nbr)[T.102] 3.049e-16 0.003 1.22e-13 1.000 -0.005 0.005
C(item_nbr)[T.103] 1.187e-16 0.003 4.75e-14 1.000 -0.005 0.005
C(item_nbr)[T.104] 3.95e-16 0.003 1.58e-13 1.000 -0.005 0.005
C(item_nbr)[T.105] 4.952e-16 0.003 1.98e-13 1.000 -0.005 0.005
C(item_nbr)[T.106] 2.044e-16 0.003 8.17e-14 1.000 -0.005 0.005
C(item_nbr)[T.107] -5.246e-16 0.003 -2.1e-13 1.000 -0.005 0.005
C(item_nbr)[T.108] 5.924e-16 0.003 2.37e-13 1.000 -0.005 0.005
C(item_nbr)[T.109] -3.009e-16 0.003 -1.2e-13 1.000 -0.005 0.005
C(item_nbr)[T.110] -6.091e-16 0.003 -2.44e-13 1.000 -0.005 0.005
C(item_nbr)[T.111] 1.451e-15 0.003 5.8e-13 1.000 -0.005 0.005
scale(tmax) 0.0038 0.003 1.289 0.198 -0.002 0.009
scale(tmin) 0.0025 0.003 0.952 0.341 -0.003 0.008
scale(tavg) 0.0032 0.003 1.183 0.237 -0.002 0.008
scale(dewpoint) -0.0030 0.004 -0.860 0.390 -0.010 0.004
scale(wetbulb) 0.0003 0.003 0.116 0.908 -0.005 0.005
scale(heat) -0.0006 0.002 -0.380 0.704 -0.004 0.003
scale(cool) 0.0003 0.000 0.973 0.331 -0.000 0.001
scale(preciptotal) 4.545e-05 0.000 0.210 0.834 -0.000 0.000
scale(stnpressure) -0.0009 0.002 -0.507 0.612 -0.004 0.002
scale(sealevel) 0.0006 0.002 0.383 0.702 -0.003 0.004
scale(resultspeed) 0.0015 0.001 2.360 0.018 0.000 0.003
scale(avgspeed) -0.0024 0.001 -2.973 0.003 -0.004 -0.001
scale(relative_humility) 0.0013 0.001 1.037 0.300 -0.001 0.004
scale(windchill) -0.0078 0.007 -1.129 0.259 -0.021 0.006
==============================================================================
Omnibus: 183394.822 Durbin-Watson: 2.004
Prob(Omnibus): 0.000 Jarque-Bera (JB): 27282229171.210
Skew: -14.775 Prob(JB): 0.00
Kurtosis: 2673.531 Cond. No. 2.25e+15
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 1.34e-25. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
Explanatory power went up further, and the condition number dropped slightly. 6. Variable transformation: df2 (log1p_units) + outlier removal + preciptotal transformation
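It is also worth checking what `np.log1p` does to the precipitation column, which is dominated by zeros and a long right tail, before folding the transform into the formula. A minimal sketch, assuming `df2_1` from the previous cell and `numpy` are in scope.
###Code
import numpy as np

# Skewness before and after log1p; a clear drop means the transform is
# usefully compressing the heavy right tail of preciptotal.
print(df2_1['preciptotal'].skew())
print(np.log1p(df2_1['preciptotal']).skew())
###Output
_____no_output_____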
###Code
# OLS - df2_1_1
model2_1_1 = sm.OLS.from_formula('log1p_units ~ scale(tmax) + scale(tmin) + scale(tavg) + scale(dewpoint) + scale(wetbulb) + scale(heat) + scale(cool)\
+ scale(np.log1p(preciptotal)) + scale(stnpressure) + scale(sealevel) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) + scale(sunset) + scale(sunrise) + scale(daytime) \
+ C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df2_1)
result2_1_1 = model2_1_1.fit()
result = result2_1_1  # keep the generic handle without refitting the same model
print(result2_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.987
Model: OLS Adj. R-squared: 0.987
Method: Least Squares F-statistic: 3.951e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 00:44:08 Log-Likelihood: 1.4191e+05
No. Observations: 91800 AIC: -2.835e+05
Df Residuals: 91624 BIC: -2.818e+05
Df Model: 175
Covariance Type: nonrobust
================================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------------
C(resultdir)[1.0] 9.959e-05 0.003 0.031 0.975 -0.006 0.006
C(resultdir)[2.0] -0.0006 0.003 -0.178 0.859 -0.007 0.006
C(resultdir)[3.0] 0.0008 0.003 0.262 0.793 -0.005 0.006
C(resultdir)[4.0] -0.0002 0.003 -0.053 0.958 -0.006 0.006
C(resultdir)[5.0] -0.0009 0.003 -0.330 0.742 -0.007 0.005
C(resultdir)[6.0] 0.0010 0.003 0.319 0.750 -0.005 0.007
C(resultdir)[7.0] 0.0019 0.003 0.647 0.518 -0.004 0.007
C(resultdir)[8.0] 0.0014 0.003 0.432 0.666 -0.005 0.008
C(resultdir)[9.0] 0.0003 0.003 0.107 0.915 -0.006 0.006
C(resultdir)[10.0] 0.0018 0.004 0.509 0.611 -0.005 0.009
C(resultdir)[11.0] -0.0003 0.003 -0.103 0.918 -0.007 0.006
C(resultdir)[12.0] 0.0017 0.003 0.507 0.612 -0.005 0.008
C(resultdir)[13.0] 0.0009 0.003 0.277 0.782 -0.005 0.007
C(resultdir)[14.0] 0.0004 0.003 0.108 0.914 -0.006 0.007
C(resultdir)[15.0] 0.0016 0.003 0.486 0.627 -0.005 0.008
C(resultdir)[16.0] 0.0015 0.004 0.416 0.677 -0.006 0.009
C(resultdir)[17.0] 0.0030 0.004 0.806 0.420 -0.004 0.010
C(resultdir)[18.0] 0.0016 0.003 0.483 0.629 -0.005 0.008
C(resultdir)[19.0] -0.0006 0.003 -0.198 0.843 -0.007 0.005
C(resultdir)[20.0] 0.0003 0.003 0.089 0.929 -0.006 0.006
C(resultdir)[21.0] 0.0012 0.003 0.428 0.669 -0.004 0.007
C(resultdir)[22.0] 0.0010 0.003 0.375 0.708 -0.004 0.006
C(resultdir)[23.0] 0.0018 0.003 0.646 0.519 -0.004 0.007
C(resultdir)[24.0] 0.0001 0.003 0.052 0.959 -0.005 0.005
C(resultdir)[25.0] 0.0011 0.003 0.420 0.674 -0.004 0.006
C(resultdir)[26.0] 0.0013 0.003 0.458 0.647 -0.004 0.007
C(resultdir)[27.0] 0.0016 0.003 0.573 0.567 -0.004 0.007
C(resultdir)[28.0] 0.0021 0.003 0.787 0.431 -0.003 0.007
C(resultdir)[29.0] 0.0018 0.003 0.654 0.513 -0.004 0.007
C(resultdir)[30.0] 0.0014 0.003 0.519 0.604 -0.004 0.007
C(resultdir)[31.0] -0.0004 0.003 -0.160 0.873 -0.006 0.005
C(resultdir)[32.0] 0.0017 0.003 0.605 0.545 -0.004 0.007
C(resultdir)[33.0] -0.0007 0.003 -0.238 0.812 -0.006 0.005
C(resultdir)[34.0] 0.0012 0.003 0.409 0.683 -0.005 0.007
C(resultdir)[35.0] 0.0017 0.003 0.570 0.569 -0.004 0.008
C(resultdir)[36.0] -0.0003 0.003 -0.094 0.925 -0.007 0.006
C(year)[T.2013] 0.0007 0.000 1.596 0.110 -0.000 0.001
C(year)[T.2014] -0.0009 0.001 -1.886 0.059 -0.002 3.71e-05
C(month)[T.2] -0.0001 0.001 -0.113 0.910 -0.002 0.002
C(month)[T.3] -0.0016 0.002 -0.990 0.322 -0.005 0.002
C(month)[T.4] -0.0017 0.002 -0.663 0.507 -0.007 0.003
C(month)[T.5] -0.0021 0.003 -0.671 0.502 -0.008 0.004
C(month)[T.6] -0.0020 0.003 -0.580 0.562 -0.009 0.005
C(month)[T.7] -0.0031 0.003 -0.975 0.330 -0.009 0.003
C(month)[T.8] -0.0017 0.003 -0.624 0.533 -0.007 0.004
C(month)[T.9] -0.0031 0.002 -1.316 0.188 -0.008 0.002
C(month)[T.10] -0.0040 0.002 -1.703 0.089 -0.009 0.001
C(month)[T.11] -0.0021 0.002 -0.937 0.349 -0.007 0.002
C(month)[T.12] -0.0014 0.002 -0.936 0.349 -0.004 0.002
C(weekend)[T.1] 0.0021 0.000 5.507 0.000 0.001 0.003
C(rainY)[T.1] 0.0005 0.000 1.124 0.261 -0.000 0.001
C(item_nbr)[T.2] 4.107e-16 0.003 1.64e-13 1.000 -0.005 0.005
C(item_nbr)[T.3] 1.464e-15 0.003 5.85e-13 1.000 -0.005 0.005
C(item_nbr)[T.4] -9.251e-16 0.003 -3.7e-13 1.000 -0.005 0.005
C(item_nbr)[T.5] 1.556e-15 0.003 6.22e-13 1.000 -0.005 0.005
C(item_nbr)[T.6] -2.995e-15 0.003 -1.2e-12 1.000 -0.005 0.005
C(item_nbr)[T.7] -1.444e-15 0.003 -5.77e-13 1.000 -0.005 0.005
C(item_nbr)[T.8] -1.997e-15 0.003 -7.98e-13 1.000 -0.005 0.005
C(item_nbr)[T.9] -2.106e-15 0.003 -8.42e-13 1.000 -0.005 0.005
C(item_nbr)[T.10] -1.138e-15 0.003 -4.55e-13 1.000 -0.005 0.005
C(item_nbr)[T.11] -2.75e-15 0.003 -1.1e-12 1.000 -0.005 0.005
C(item_nbr)[T.12] -2.242e-15 0.003 -8.96e-13 1.000 -0.005 0.005
C(item_nbr)[T.13] 3.416e-15 0.003 1.37e-12 1.000 -0.005 0.005
C(item_nbr)[T.14] -5.701e-15 0.003 -2.28e-12 1.000 -0.005 0.005
C(item_nbr)[T.15] -2.309e-15 0.003 -9.23e-13 1.000 -0.005 0.005
C(item_nbr)[T.16] 3.2860 0.003 1022.279 0.000 3.280 3.292
C(item_nbr)[T.17] -1.383e-15 0.003 -5.53e-13 1.000 -0.005 0.005
C(item_nbr)[T.18] -1.426e-14 0.003 -5.7e-12 1.000 -0.005 0.005
C(item_nbr)[T.19] -1.587e-15 0.003 -6.35e-13 1.000 -0.005 0.005
C(item_nbr)[T.20] 1.11e-16 0.003 4.44e-14 1.000 -0.005 0.005
C(item_nbr)[T.21] -7.518e-16 0.003 -3.01e-13 1.000 -0.005 0.005
C(item_nbr)[T.22] -3.989e-16 0.003 -1.6e-13 1.000 -0.005 0.005
C(item_nbr)[T.23] -5.721e-16 0.003 -2.29e-13 1.000 -0.005 0.005
C(item_nbr)[T.24] 7.474e-17 0.003 2.99e-14 1.000 -0.005 0.005
C(item_nbr)[T.25] 4.9851 0.003 1801.141 0.000 4.980 4.991
C(item_nbr)[T.26] -6.745e-16 0.003 -2.7e-13 1.000 -0.005 0.005
C(item_nbr)[T.27] -4.871e-16 0.003 -1.95e-13 1.000 -0.005 0.005
C(item_nbr)[T.28] -9.644e-16 0.003 -3.86e-13 1.000 -0.005 0.005
C(item_nbr)[T.29] -4.516e-16 0.003 -1.81e-13 1.000 -0.005 0.005
C(item_nbr)[T.30] -5.348e-16 0.003 -2.14e-13 1.000 -0.005 0.005
C(item_nbr)[T.31] 6.628e-18 0.003 2.65e-15 1.000 -0.005 0.005
C(item_nbr)[T.32] -5.346e-16 0.003 -2.14e-13 1.000 -0.005 0.005
C(item_nbr)[T.33] -5.74e-16 0.003 -2.3e-13 1.000 -0.005 0.005
C(item_nbr)[T.34] -5.907e-16 0.003 -2.36e-13 1.000 -0.005 0.005
C(item_nbr)[T.35] -6.682e-16 0.003 -2.67e-13 1.000 -0.005 0.005
C(item_nbr)[T.36] -5.459e-16 0.003 -2.18e-13 1.000 -0.005 0.005
C(item_nbr)[T.37] -5.095e-16 0.003 -2.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.38] -4.797e-16 0.003 -1.92e-13 1.000 -0.005 0.005
C(item_nbr)[T.39] 0.0268 0.003 10.506 0.000 0.022 0.032
C(item_nbr)[T.40] -7.749e-16 0.003 -3.1e-13 1.000 -0.005 0.005
C(item_nbr)[T.41] -4.186e-16 0.003 -1.67e-13 1.000 -0.005 0.005
C(item_nbr)[T.42] -2.064e-16 0.003 -8.25e-14 1.000 -0.005 0.005
C(item_nbr)[T.43] -9.536e-16 0.003 -3.81e-13 1.000 -0.005 0.005
C(item_nbr)[T.44] -4.47e-17 0.003 -1.79e-14 1.000 -0.005 0.005
C(item_nbr)[T.45] -6.409e-16 0.003 -2.56e-13 1.000 -0.005 0.005
C(item_nbr)[T.46] -8.497e-16 0.003 -3.4e-13 1.000 -0.005 0.005
C(item_nbr)[T.47] -4.582e-16 0.003 -1.83e-13 1.000 -0.005 0.005
C(item_nbr)[T.48] -4.809e-16 0.003 -1.92e-13 1.000 -0.005 0.005
C(item_nbr)[T.49] -6.014e-16 0.003 -2.4e-13 1.000 -0.005 0.005
C(item_nbr)[T.50] 0.0548 0.003 21.245 0.000 0.050 0.060
C(item_nbr)[T.51] -1.016e-16 0.003 -4.06e-14 1.000 -0.005 0.005
C(item_nbr)[T.52] -6.353e-16 0.003 -2.54e-13 1.000 -0.005 0.005
C(item_nbr)[T.53] -3.131e-16 0.003 -1.25e-13 1.000 -0.005 0.005
C(item_nbr)[T.54] -4.51e-16 0.003 -1.8e-13 1.000 -0.005 0.005
C(item_nbr)[T.55] -6.41e-16 0.003 -2.56e-13 1.000 -0.005 0.005
C(item_nbr)[T.56] -3.784e-16 0.003 -1.51e-13 1.000 -0.005 0.005
C(item_nbr)[T.57] -4.508e-16 0.003 -1.8e-13 1.000 -0.005 0.005
C(item_nbr)[T.58] -5.996e-16 0.003 -2.4e-13 1.000 -0.005 0.005
C(item_nbr)[T.59] -3.195e-16 0.003 -1.28e-13 1.000 -0.005 0.005
C(item_nbr)[T.60] -7.767e-16 0.003 -3.11e-13 1.000 -0.005 0.005
C(item_nbr)[T.61] -3.675e-16 0.003 -1.47e-13 1.000 -0.005 0.005
C(item_nbr)[T.62] -5.543e-16 0.003 -2.22e-13 1.000 -0.005 0.005
C(item_nbr)[T.63] -6.852e-16 0.003 -2.74e-13 1.000 -0.005 0.005
C(item_nbr)[T.64] 1.2729 0.023 54.942 0.000 1.228 1.318
C(item_nbr)[T.65] -4.468e-16 0.003 -1.79e-13 1.000 -0.005 0.005
C(item_nbr)[T.66] -9.441e-16 0.003 -3.77e-13 1.000 -0.005 0.005
C(item_nbr)[T.67] -7.2e-16 0.003 -2.88e-13 1.000 -0.005 0.005
C(item_nbr)[T.68] -5.465e-16 0.003 -2.19e-13 1.000 -0.005 0.005
C(item_nbr)[T.69] -4.467e-16 0.003 -1.79e-13 1.000 -0.005 0.005
C(item_nbr)[T.70] -4.995e-16 0.003 -2e-13 1.000 -0.005 0.005
C(item_nbr)[T.71] -4.02e-16 0.003 -1.61e-13 1.000 -0.005 0.005
C(item_nbr)[T.72] -5.72e-16 0.003 -2.29e-13 1.000 -0.005 0.005
C(item_nbr)[T.73] -6.718e-16 0.003 -2.69e-13 1.000 -0.005 0.005
C(item_nbr)[T.74] -6.864e-16 0.003 -2.74e-13 1.000 -0.005 0.005
C(item_nbr)[T.75] -5.222e-16 0.003 -2.09e-13 1.000 -0.005 0.005
C(item_nbr)[T.76] -3.809e-16 0.003 -1.52e-13 1.000 -0.005 0.005
C(item_nbr)[T.77] 0.3058 0.013 22.732 0.000 0.279 0.332
C(item_nbr)[T.78] -4.388e-16 0.003 -1.75e-13 1.000 -0.005 0.005
C(item_nbr)[T.79] -1.978e-16 0.003 -7.91e-14 1.000 -0.005 0.005
C(item_nbr)[T.80] -5.018e-16 0.003 -2.01e-13 1.000 -0.005 0.005
C(item_nbr)[T.81] -3.598e-16 0.003 -1.44e-13 1.000 -0.005 0.005
C(item_nbr)[T.82] -6.782e-16 0.003 -2.71e-13 1.000 -0.005 0.005
C(item_nbr)[T.83] -5.567e-16 0.003 -2.23e-13 1.000 -0.005 0.005
C(item_nbr)[T.84] -6.024e-16 0.003 -2.41e-13 1.000 -0.005 0.005
C(item_nbr)[T.85] 0.0035 0.003 1.368 0.171 -0.002 0.008
C(item_nbr)[T.86] -3.448e-16 0.003 -1.38e-13 1.000 -0.005 0.005
C(item_nbr)[T.87] -5.261e-16 0.003 -2.1e-13 1.000 -0.005 0.005
C(item_nbr)[T.88] -4.949e-16 0.003 -1.98e-13 1.000 -0.005 0.005
C(item_nbr)[T.89] -4.883e-16 0.003 -1.95e-13 1.000 -0.005 0.005
C(item_nbr)[T.90] -5.944e-16 0.003 -2.38e-13 1.000 -0.005 0.005
C(item_nbr)[T.91] -6.268e-16 0.003 -2.51e-13 1.000 -0.005 0.005
C(item_nbr)[T.92] -5.229e-16 0.003 -2.09e-13 1.000 -0.005 0.005
C(item_nbr)[T.93] 0.0087 0.003 3.321 0.001 0.004 0.014
C(item_nbr)[T.94] -4.997e-16 0.003 -2e-13 1.000 -0.005 0.005
C(item_nbr)[T.95] -4.772e-16 0.003 -1.91e-13 1.000 -0.005 0.005
C(item_nbr)[T.96] -5.175e-16 0.003 -2.07e-13 1.000 -0.005 0.005
C(item_nbr)[T.97] -4.253e-16 0.003 -1.7e-13 1.000 -0.005 0.005
C(item_nbr)[T.98] -6.271e-16 0.003 -2.51e-13 1.000 -0.005 0.005
C(item_nbr)[T.99] -4.532e-16 0.003 -1.81e-13 1.000 -0.005 0.005
C(item_nbr)[T.100] -5.169e-16 0.003 -2.07e-13 1.000 -0.005 0.005
C(item_nbr)[T.101] -4.317e-16 0.003 -1.73e-13 1.000 -0.005 0.005
C(item_nbr)[T.102] -5.811e-16 0.003 -2.32e-13 1.000 -0.005 0.005
C(item_nbr)[T.103] -7.254e-16 0.003 -2.9e-13 1.000 -0.005 0.005
C(item_nbr)[T.104] -6.029e-16 0.003 -2.41e-13 1.000 -0.005 0.005
C(item_nbr)[T.105] -4.925e-16 0.003 -1.97e-13 1.000 -0.005 0.005
C(item_nbr)[T.106] -2.775e-16 0.003 -1.11e-13 1.000 -0.005 0.005
C(item_nbr)[T.107] -3.317e-16 0.003 -1.33e-13 1.000 -0.005 0.005
C(item_nbr)[T.108] -1.99e-16 0.003 -7.96e-14 1.000 -0.005 0.005
C(item_nbr)[T.109] -4.693e-16 0.003 -1.88e-13 1.000 -0.005 0.005
C(item_nbr)[T.110] -6.881e-16 0.003 -2.75e-13 1.000 -0.005 0.005
C(item_nbr)[T.111] -8.046e-17 0.003 -3.22e-14 1.000 -0.005 0.005
scale(tmax) 0.0038 0.003 1.314 0.189 -0.002 0.010
scale(tmin) 0.0027 0.003 1.039 0.299 -0.002 0.008
scale(tavg) 0.0033 0.003 1.238 0.216 -0.002 0.009
scale(dewpoint) -0.0029 0.004 -0.822 0.411 -0.010 0.004
scale(wetbulb) 0.0004 0.003 0.165 0.869 -0.005 0.005
scale(heat) -0.0004 0.002 -0.257 0.797 -0.004 0.003
scale(cool) 0.0003 0.000 0.951 0.342 -0.000 0.001
scale(np.log1p(preciptotal)) 6.648e-05 0.000 0.290 0.772 -0.000 0.001
scale(stnpressure) -0.0010 0.002 -0.588 0.556 -0.004 0.002
scale(sealevel) 0.0008 0.002 0.468 0.640 -0.003 0.004
scale(resultspeed) 0.0015 0.001 2.295 0.022 0.000 0.003
scale(avgspeed) -0.0024 0.001 -2.943 0.003 -0.004 -0.001
scale(sunset) -0.0019 0.003 -0.741 0.459 -0.007 0.003
scale(sunrise) -0.0013 0.003 -0.482 0.629 -0.006 0.004
scale(daytime) -0.0003 0.000 -0.835 0.404 -0.001 0.000
scale(relative_humility) 0.0012 0.001 0.965 0.335 -0.001 0.004
scale(windchill) -0.0082 0.007 -1.182 0.237 -0.022 0.005
==============================================================================
Omnibus: 183394.460 Durbin-Watson: 2.004
Prob(Omnibus): 0.000 Jarque-Bera (JB): 27276446356.518
Skew: -14.775 Prob(JB): 0.00
Kurtosis: 2673.248 Cond. No. 5.56e+15
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 2.77e-26. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
R-squared is now close to 1 while the condition number has not changed, so overfitting is suspected. 6 - 1. Variable transformation: df2 (log1p_units) + outlier removal + preciptotal transformation + drop tmax/tmin/tavg/sunset/sunrise/daytime/stnpressure/sealevel (based on VIF)
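The VIF values used to decide which columns to drop are not shown in this cell; the sketch below is one way they could be computed for the continuous weather regressors (the column list is an assumption based on the formula above), assuming `df2_1`, `pandas as pd`, and `statsmodels.api as sm` are in scope.
###Code
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Continuous regressors taken from the formula above; categorical dummies are omitted.
cols = ['tmax', 'tmin', 'tavg', 'dewpoint', 'wetbulb', 'heat', 'cool', 'preciptotal',
        'stnpressure', 'sealevel', 'resultspeed', 'avgspeed', 'sunset', 'sunrise',
        'daytime', 'relative_humility', 'windchill']
X = sm.add_constant(df2_1[cols].dropna())
vif = pd.DataFrame({
    'feature': X.columns,
    'VIF': [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
})
# Very large (or infinite) VIFs flag near-duplicate columns such as the
# temperature and daylight variables.
print(vif.sort_values('VIF', ascending=False))
###Output
_____no_output_____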
###Code
# OLS - df2_1_1
model2_1_1 = sm.OLS.from_formula('log1p_units ~ scale(dewpoint) + scale(wetbulb) + scale(heat) + scale(cool)\
+ scale(np.log1p(preciptotal)) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) + C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df2_1)
result2_1_1 = model2_1_1.fit()
result = result2_1_1  # keep the generic handle without refitting the same model
print(result2_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.987
Model: OLS Adj. R-squared: 0.987
Method: Least Squares F-statistic: 4.092e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:09:58 Log-Likelihood: 1.4191e+05
No. Observations: 91800 AIC: -2.835e+05
Df Residuals: 91630 BIC: -2.819e+05
Df Model: 169
Covariance Type: nonrobust
================================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------------
C(resultdir)[1.0] 0.0009 0.003 0.347 0.728 -0.004 0.006
C(resultdir)[2.0] 0.0001 0.003 0.054 0.957 -0.005 0.005
C(resultdir)[3.0] 0.0014 0.002 0.611 0.541 -0.003 0.006
C(resultdir)[4.0] 0.0006 0.002 0.260 0.795 -0.004 0.005
C(resultdir)[5.0] -0.0003 0.002 -0.134 0.893 -0.005 0.004
C(resultdir)[6.0] 0.0016 0.002 0.654 0.513 -0.003 0.006
C(resultdir)[7.0] 0.0023 0.002 0.999 0.318 -0.002 0.007
C(resultdir)[8.0] 0.0021 0.003 0.800 0.423 -0.003 0.007
C(resultdir)[9.0] 0.0009 0.003 0.352 0.725 -0.004 0.006
C(resultdir)[10.0] 0.0024 0.003 0.730 0.466 -0.004 0.009
C(resultdir)[11.0] -5.74e-05 0.003 -0.020 0.984 -0.006 0.005
C(resultdir)[12.0] 0.0020 0.003 0.719 0.472 -0.003 0.007
C(resultdir)[13.0] 0.0017 0.003 0.683 0.494 -0.003 0.007
C(resultdir)[14.0] 0.0013 0.003 0.438 0.661 -0.004 0.007
C(resultdir)[15.0] 0.0023 0.003 0.827 0.409 -0.003 0.008
C(resultdir)[16.0] 0.0023 0.003 0.755 0.450 -0.004 0.008
C(resultdir)[17.0] 0.0037 0.003 1.150 0.250 -0.003 0.010
C(resultdir)[18.0] 0.0021 0.003 0.782 0.434 -0.003 0.007
C(resultdir)[19.0] -8.595e-05 0.003 -0.034 0.973 -0.005 0.005
C(resultdir)[20.0] 0.0010 0.002 0.405 0.685 -0.004 0.006
C(resultdir)[21.0] 0.0018 0.002 0.835 0.404 -0.002 0.006
C(resultdir)[22.0] 0.0017 0.002 0.812 0.417 -0.002 0.006
C(resultdir)[23.0] 0.0025 0.002 1.230 0.219 -0.002 0.007
C(resultdir)[24.0] 0.0009 0.002 0.415 0.678 -0.003 0.005
C(resultdir)[25.0] 0.0019 0.002 0.930 0.353 -0.002 0.006
C(resultdir)[26.0] 0.0021 0.002 1.007 0.314 -0.002 0.006
C(resultdir)[27.0] 0.0025 0.002 1.221 0.222 -0.002 0.007
C(resultdir)[28.0] 0.0030 0.002 1.469 0.142 -0.001 0.007
C(resultdir)[29.0] 0.0026 0.002 1.281 0.200 -0.001 0.007
C(resultdir)[30.0] 0.0023 0.002 1.071 0.284 -0.002 0.007
C(resultdir)[31.0] 0.0004 0.002 0.189 0.850 -0.004 0.005
C(resultdir)[32.0] 0.0025 0.002 1.171 0.242 -0.002 0.007
C(resultdir)[33.0] 0.0002 0.002 0.084 0.933 -0.004 0.005
C(resultdir)[34.0] 0.0019 0.002 0.790 0.429 -0.003 0.007
C(resultdir)[35.0] 0.0026 0.002 1.073 0.283 -0.002 0.007
C(resultdir)[36.0] 0.0004 0.003 0.138 0.890 -0.005 0.006
C(year)[T.2013] 0.0006 0.000 1.459 0.145 -0.000 0.001
C(year)[T.2014] -0.0011 0.000 -2.217 0.027 -0.002 -0.000
C(month)[T.2] -0.0009 0.001 -1.017 0.309 -0.003 0.001
C(month)[T.3] -0.0026 0.001 -2.992 0.003 -0.004 -0.001
C(month)[T.4] -0.0028 0.001 -2.648 0.008 -0.005 -0.001
C(month)[T.5] -0.0036 0.001 -3.053 0.002 -0.006 -0.001
C(month)[T.6] -0.0039 0.001 -2.894 0.004 -0.007 -0.001
C(month)[T.7] -0.0050 0.001 -3.349 0.001 -0.008 -0.002
C(month)[T.8] -0.0031 0.001 -2.191 0.028 -0.006 -0.000
C(month)[T.9] -0.0036 0.001 -2.842 0.004 -0.006 -0.001
C(month)[T.10] -0.0036 0.001 -3.253 0.001 -0.006 -0.001
C(month)[T.11] -0.0012 0.001 -1.258 0.208 -0.003 0.001
C(month)[T.12] -0.0006 0.001 -0.635 0.525 -0.003 0.001
C(weekend)[T.1] 0.0021 0.000 5.468 0.000 0.001 0.003
C(rainY)[T.1] 0.0006 0.000 1.301 0.193 -0.000 0.002
C(item_nbr)[T.2] -1.212e-15 0.003 -4.85e-13 1.000 -0.005 0.005
C(item_nbr)[T.3] 1.086e-15 0.003 4.34e-13 1.000 -0.005 0.005
C(item_nbr)[T.4] 1.023e-15 0.003 4.09e-13 1.000 -0.005 0.005
C(item_nbr)[T.5] -5.495e-16 0.003 -2.2e-13 1.000 -0.005 0.005
C(item_nbr)[T.6] 1.305e-15 0.003 5.22e-13 1.000 -0.005 0.005
C(item_nbr)[T.7] -9.465e-16 0.003 -3.78e-13 1.000 -0.005 0.005
C(item_nbr)[T.8] -7.579e-16 0.003 -3.03e-13 1.000 -0.005 0.005
C(item_nbr)[T.9] 4.615e-15 0.003 1.85e-12 1.000 -0.005 0.005
C(item_nbr)[T.10] -3.653e-16 0.003 -1.46e-13 1.000 -0.005 0.005
C(item_nbr)[T.11] -3.684e-15 0.003 -1.47e-12 1.000 -0.005 0.005
C(item_nbr)[T.12] -7.72e-16 0.003 -3.09e-13 1.000 -0.005 0.005
C(item_nbr)[T.13] 1.668e-17 0.003 6.67e-15 1.000 -0.005 0.005
C(item_nbr)[T.14] 9.828e-16 0.003 3.93e-13 1.000 -0.005 0.005
C(item_nbr)[T.15] 1.177e-17 0.003 4.71e-15 1.000 -0.005 0.005
C(item_nbr)[T.16] 3.2859 0.003 1022.301 0.000 3.280 3.292
C(item_nbr)[T.17] -3.099e-16 0.003 -1.24e-13 1.000 -0.005 0.005
C(item_nbr)[T.18] -3.72e-16 0.003 -1.49e-13 1.000 -0.005 0.005
C(item_nbr)[T.19] -2.436e-16 0.003 -9.74e-14 1.000 -0.005 0.005
C(item_nbr)[T.20] -4.011e-16 0.003 -1.6e-13 1.000 -0.005 0.005
C(item_nbr)[T.21] -1.73e-16 0.003 -6.92e-14 1.000 -0.005 0.005
C(item_nbr)[T.22] -1.956e-16 0.003 -7.82e-14 1.000 -0.005 0.005
C(item_nbr)[T.23] -1.261e-16 0.003 -5.04e-14 1.000 -0.005 0.005
C(item_nbr)[T.24] 8.615e-17 0.003 3.44e-14 1.000 -0.005 0.005
C(item_nbr)[T.25] 4.9851 0.003 1801.170 0.000 4.980 4.991
C(item_nbr)[T.26] -8.611e-17 0.003 -3.44e-14 1.000 -0.005 0.005
C(item_nbr)[T.27] -5.819e-17 0.003 -2.33e-14 1.000 -0.005 0.005
C(item_nbr)[T.28] -2.018e-16 0.003 -8.07e-14 1.000 -0.005 0.005
C(item_nbr)[T.29] 4.22e-17 0.003 1.69e-14 1.000 -0.005 0.005
C(item_nbr)[T.30] -9.165e-17 0.003 -3.66e-14 1.000 -0.005 0.005
C(item_nbr)[T.31] -1.37e-16 0.003 -5.48e-14 1.000 -0.005 0.005
C(item_nbr)[T.32] 1.636e-16 0.003 6.54e-14 1.000 -0.005 0.005
C(item_nbr)[T.33] 9.644e-17 0.003 3.86e-14 1.000 -0.005 0.005
C(item_nbr)[T.34] -1.43e-16 0.003 -5.72e-14 1.000 -0.005 0.005
C(item_nbr)[T.35] -5.041e-17 0.003 -2.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.36] -4.125e-16 0.003 -1.65e-13 1.000 -0.005 0.005
C(item_nbr)[T.37] -4.942e-17 0.003 -1.98e-14 1.000 -0.005 0.005
C(item_nbr)[T.38] -1.236e-17 0.003 -4.94e-15 1.000 -0.005 0.005
C(item_nbr)[T.39] 0.0268 0.003 10.506 0.000 0.022 0.032
C(item_nbr)[T.40] 3.154e-16 0.003 1.26e-13 1.000 -0.005 0.005
C(item_nbr)[T.41] 2.007e-17 0.003 8.03e-15 1.000 -0.005 0.005
C(item_nbr)[T.42] -1.361e-16 0.003 -5.44e-14 1.000 -0.005 0.005
C(item_nbr)[T.43] 3.818e-17 0.003 1.53e-14 1.000 -0.005 0.005
C(item_nbr)[T.44] -8.961e-17 0.003 -3.58e-14 1.000 -0.005 0.005
C(item_nbr)[T.45] 3.753e-16 0.003 1.5e-13 1.000 -0.005 0.005
C(item_nbr)[T.46] -1.905e-16 0.003 -7.62e-14 1.000 -0.005 0.005
C(item_nbr)[T.47] 6.988e-18 0.003 2.79e-15 1.000 -0.005 0.005
C(item_nbr)[T.48] 2.203e-16 0.003 8.81e-14 1.000 -0.005 0.005
C(item_nbr)[T.49] -2.363e-17 0.003 -9.45e-15 1.000 -0.005 0.005
C(item_nbr)[T.50] 0.0548 0.003 21.243 0.000 0.050 0.060
C(item_nbr)[T.51] -1.623e-17 0.003 -6.49e-15 1.000 -0.005 0.005
C(item_nbr)[T.52] -3.867e-17 0.003 -1.55e-14 1.000 -0.005 0.005
C(item_nbr)[T.53] 7.719e-17 0.003 3.09e-14 1.000 -0.005 0.005
C(item_nbr)[T.54] -1.926e-17 0.003 -7.7e-15 1.000 -0.005 0.005
C(item_nbr)[T.55] 4.765e-17 0.003 1.91e-14 1.000 -0.005 0.005
C(item_nbr)[T.56] 7.352e-17 0.003 2.94e-14 1.000 -0.005 0.005
C(item_nbr)[T.57] -1.956e-17 0.003 -7.82e-15 1.000 -0.005 0.005
C(item_nbr)[T.58] -1.087e-16 0.003 -4.35e-14 1.000 -0.005 0.005
C(item_nbr)[T.59] 1.36e-17 0.003 5.44e-15 1.000 -0.005 0.005
C(item_nbr)[T.60] -4.727e-17 0.003 -1.89e-14 1.000 -0.005 0.005
C(item_nbr)[T.61] -7.778e-17 0.003 -3.11e-14 1.000 -0.005 0.005
C(item_nbr)[T.62] -2.378e-17 0.003 -9.51e-15 1.000 -0.005 0.005
C(item_nbr)[T.63] -1.521e-16 0.003 -6.08e-14 1.000 -0.005 0.005
C(item_nbr)[T.64] 1.2735 0.023 54.977 0.000 1.228 1.319
C(item_nbr)[T.65] -4.131e-17 0.003 -1.65e-14 1.000 -0.005 0.005
C(item_nbr)[T.66] 1.054e-17 0.003 4.21e-15 1.000 -0.005 0.005
C(item_nbr)[T.67] -3.141e-16 0.003 -1.26e-13 1.000 -0.005 0.005
C(item_nbr)[T.68] -6.316e-17 0.003 -2.53e-14 1.000 -0.005 0.005
C(item_nbr)[T.69] 5.852e-17 0.003 2.34e-14 1.000 -0.005 0.005
C(item_nbr)[T.70] 6.446e-17 0.003 2.58e-14 1.000 -0.005 0.005
C(item_nbr)[T.71] 8.584e-17 0.003 3.43e-14 1.000 -0.005 0.005
C(item_nbr)[T.72] 5.066e-17 0.003 2.03e-14 1.000 -0.005 0.005
C(item_nbr)[T.73] 5e-18 0.003 2e-15 1.000 -0.005 0.005
C(item_nbr)[T.74] 9.838e-17 0.003 3.93e-14 1.000 -0.005 0.005
C(item_nbr)[T.75] 4.336e-17 0.003 1.73e-14 1.000 -0.005 0.005
C(item_nbr)[T.76] 7.688e-17 0.003 3.07e-14 1.000 -0.005 0.005
C(item_nbr)[T.77] 0.3060 0.013 22.748 0.000 0.280 0.332
C(item_nbr)[T.78] 2.542e-17 0.003 1.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.79] -9.616e-17 0.003 -3.84e-14 1.000 -0.005 0.005
C(item_nbr)[T.80] -2.456e-17 0.003 -9.82e-15 1.000 -0.005 0.005
C(item_nbr)[T.81] -5.399e-17 0.003 -2.16e-14 1.000 -0.005 0.005
C(item_nbr)[T.82] -1.02e-17 0.003 -4.08e-15 1.000 -0.005 0.005
C(item_nbr)[T.83] -2.302e-17 0.003 -9.21e-15 1.000 -0.005 0.005
C(item_nbr)[T.84] 1.825e-17 0.003 7.3e-15 1.000 -0.005 0.005
C(item_nbr)[T.85] 0.0035 0.003 1.369 0.171 -0.002 0.008
C(item_nbr)[T.86] 6.95e-17 0.003 2.78e-14 1.000 -0.005 0.005
C(item_nbr)[T.87] 7.979e-17 0.003 3.19e-14 1.000 -0.005 0.005
C(item_nbr)[T.88] -1.396e-16 0.003 -5.58e-14 1.000 -0.005 0.005
C(item_nbr)[T.89] -1.041e-17 0.003 -4.16e-15 1.000 -0.005 0.005
C(item_nbr)[T.90] 5.151e-17 0.003 2.06e-14 1.000 -0.005 0.005
C(item_nbr)[T.91] 1.12e-16 0.003 4.48e-14 1.000 -0.005 0.005
C(item_nbr)[T.92] 1.795e-17 0.003 7.18e-15 1.000 -0.005 0.005
C(item_nbr)[T.93] 0.0087 0.003 3.320 0.001 0.004 0.014
C(item_nbr)[T.94] 2.129e-16 0.003 8.51e-14 1.000 -0.005 0.005
C(item_nbr)[T.95] 1.837e-16 0.003 7.34e-14 1.000 -0.005 0.005
C(item_nbr)[T.96] -2.078e-16 0.003 -8.31e-14 1.000 -0.005 0.005
C(item_nbr)[T.97] -7.092e-18 0.003 -2.84e-15 1.000 -0.005 0.005
C(item_nbr)[T.98] -8.172e-17 0.003 -3.27e-14 1.000 -0.005 0.005
C(item_nbr)[T.99] -1.82e-17 0.003 -7.28e-15 1.000 -0.005 0.005
C(item_nbr)[T.100] -2.554e-17 0.003 -1.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.101] -1.212e-16 0.003 -4.85e-14 1.000 -0.005 0.005
C(item_nbr)[T.102] 2.252e-16 0.003 9e-14 1.000 -0.005 0.005
C(item_nbr)[T.103] 1.602e-16 0.003 6.41e-14 1.000 -0.005 0.005
C(item_nbr)[T.104] -1.766e-16 0.003 -7.06e-14 1.000 -0.005 0.005
C(item_nbr)[T.105] 6.864e-18 0.003 2.74e-15 1.000 -0.005 0.005
C(item_nbr)[T.106] 3.634e-16 0.003 1.45e-13 1.000 -0.005 0.005
C(item_nbr)[T.107] -8.958e-17 0.003 -3.58e-14 1.000 -0.005 0.005
C(item_nbr)[T.108] 1.125e-16 0.003 4.5e-14 1.000 -0.005 0.005
C(item_nbr)[T.109] -4.483e-16 0.003 -1.79e-13 1.000 -0.005 0.005
C(item_nbr)[T.110] 1.16e-15 0.003 4.64e-13 1.000 -0.005 0.005
C(item_nbr)[T.111] -8.541e-16 0.003 -3.42e-13 1.000 -0.005 0.005
scale(dewpoint) -0.0010 0.003 -0.313 0.754 -0.007 0.005
scale(wetbulb) 0.0008 0.002 0.321 0.748 -0.004 0.006
scale(heat) -0.0012 0.002 -0.780 0.435 -0.004 0.002
scale(cool) 0.0004 0.000 1.116 0.264 -0.000 0.001
scale(np.log1p(preciptotal)) 9.106e-05 0.000 0.406 0.685 -0.000 0.001
scale(resultspeed) 0.0014 0.001 2.224 0.026 0.000 0.003
scale(avgspeed) -0.0018 0.001 -2.607 0.009 -0.003 -0.000
scale(relative_humility) 0.0005 0.001 0.410 0.682 -0.002 0.003
scale(windchill) -0.0013 0.003 -0.368 0.713 -0.008 0.005
==============================================================================
Omnibus: 183413.314 Durbin-Watson: 2.003
Prob(Omnibus): 0.000 Jarque-Bera (JB): 27293153360.084
Skew: -14.779 Prob(JB): 0.00
Kurtosis: 2674.066 Cond. No. 296.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
6 - 2. Variable transformation: df2 (log1p_units) + outlier removal + preciptotal transformation + drop tmax/tmin/tavg/sunset/sunrise/daytime/stnpressure/sealevel + drop wetbulb/dewpoint (based on VIF) --> see the VIF section below.
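Because R-squared barely moves between these reduced fits, it is easier to compare them on a few scalar diagnostics per model. A minimal sketch, assuming the most recent fit `result2_1_1` is still in scope; the same three lines can be rerun after each refit below.
###Code
# Compact per-model diagnostics to compare across sections 6-1, 6-2 and 6-3.
print('adj R^2   :', result2_1_1.rsquared_adj)
print('AIC / BIC :', result2_1_1.aic, result2_1_1.bic)
print('Cond. No. :', result2_1_1.condition_number)
###Output
_____no_output_____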
###Code
# OLS - df2_1_1
model2_1_1 = sm.OLS.from_formula('log1p_units ~ scale(heat) + scale(cool)\
+ scale(np.log1p(preciptotal)) + scale(resultspeed) \
+ C(resultdir) + scale(avgspeed) + C(year) + C(month) + scale(relative_humility) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df2_1)
result2_1_1 = model2_1_1.fit()
result = result2_1_1  # keep the generic handle without refitting the same model
print(result2_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.987
Model: OLS Adj. R-squared: 0.987
Method: Least Squares F-statistic: 4.141e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:13:57 Log-Likelihood: 1.4191e+05
No. Observations: 91800 AIC: -2.835e+05
Df Residuals: 91632 BIC: -2.819e+05
Df Model: 167
Covariance Type: nonrobust
================================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------------
C(resultdir)[1.0] 0.0009 0.003 0.353 0.724 -0.004 0.006
C(resultdir)[2.0] 0.0002 0.003 0.057 0.954 -0.005 0.005
C(resultdir)[3.0] 0.0014 0.002 0.607 0.544 -0.003 0.006
C(resultdir)[4.0] 0.0006 0.002 0.250 0.802 -0.004 0.005
C(resultdir)[5.0] -0.0003 0.002 -0.136 0.892 -0.005 0.004
C(resultdir)[6.0] 0.0016 0.002 0.654 0.513 -0.003 0.006
C(resultdir)[7.0] 0.0023 0.002 0.986 0.324 -0.002 0.007
C(resultdir)[8.0] 0.0021 0.003 0.789 0.430 -0.003 0.007
C(resultdir)[9.0] 0.0009 0.003 0.350 0.726 -0.004 0.006
C(resultdir)[10.0] 0.0023 0.003 0.728 0.467 -0.004 0.009
C(resultdir)[11.0] -7.544e-05 0.003 -0.027 0.979 -0.006 0.005
C(resultdir)[12.0] 0.0019 0.003 0.702 0.483 -0.003 0.007
C(resultdir)[13.0] 0.0017 0.003 0.683 0.495 -0.003 0.007
C(resultdir)[14.0] 0.0012 0.003 0.434 0.665 -0.004 0.007
C(resultdir)[15.0] 0.0022 0.003 0.817 0.414 -0.003 0.008
C(resultdir)[16.0] 0.0023 0.003 0.769 0.442 -0.004 0.008
C(resultdir)[17.0] 0.0037 0.003 1.153 0.249 -0.003 0.010
C(resultdir)[18.0] 0.0021 0.003 0.777 0.437 -0.003 0.007
C(resultdir)[19.0] -7.914e-05 0.003 -0.031 0.975 -0.005 0.005
C(resultdir)[20.0] 0.0010 0.002 0.416 0.677 -0.004 0.006
C(resultdir)[21.0] 0.0018 0.002 0.834 0.404 -0.002 0.006
C(resultdir)[22.0] 0.0017 0.002 0.808 0.419 -0.002 0.006
C(resultdir)[23.0] 0.0025 0.002 1.230 0.219 -0.002 0.007
C(resultdir)[24.0] 0.0009 0.002 0.415 0.678 -0.003 0.005
C(resultdir)[25.0] 0.0019 0.002 0.923 0.356 -0.002 0.006
C(resultdir)[26.0] 0.0021 0.002 1.002 0.316 -0.002 0.006
C(resultdir)[27.0] 0.0025 0.002 1.209 0.227 -0.002 0.007
C(resultdir)[28.0] 0.0030 0.002 1.458 0.145 -0.001 0.007
C(resultdir)[29.0] 0.0026 0.002 1.273 0.203 -0.001 0.007
C(resultdir)[30.0] 0.0023 0.002 1.061 0.289 -0.002 0.006
C(resultdir)[31.0] 0.0004 0.002 0.183 0.855 -0.004 0.005
C(resultdir)[32.0] 0.0025 0.002 1.185 0.236 -0.002 0.007
C(resultdir)[33.0] 0.0002 0.002 0.074 0.941 -0.004 0.005
C(resultdir)[34.0] 0.0019 0.002 0.792 0.428 -0.003 0.007
C(resultdir)[35.0] 0.0025 0.002 1.067 0.286 -0.002 0.007
C(resultdir)[36.0] 0.0004 0.003 0.135 0.893 -0.005 0.006
C(year)[T.2013] 0.0006 0.000 1.462 0.144 -0.000 0.001
C(year)[T.2014] -0.0011 0.000 -2.209 0.027 -0.002 -0.000
C(month)[T.2] -0.0009 0.001 -1.017 0.309 -0.003 0.001
C(month)[T.3] -0.0026 0.001 -2.976 0.003 -0.004 -0.001
C(month)[T.4] -0.0027 0.001 -2.620 0.009 -0.005 -0.001
C(month)[T.5] -0.0036 0.001 -3.063 0.002 -0.006 -0.001
C(month)[T.6] -0.0039 0.001 -2.953 0.003 -0.006 -0.001
C(month)[T.7] -0.0049 0.001 -3.455 0.001 -0.008 -0.002
C(month)[T.8] -0.0031 0.001 -2.278 0.023 -0.006 -0.000
C(month)[T.9] -0.0036 0.001 -2.911 0.004 -0.006 -0.001
C(month)[T.10] -0.0036 0.001 -3.280 0.001 -0.006 -0.001
C(month)[T.11] -0.0013 0.001 -1.265 0.206 -0.003 0.001
C(month)[T.12] -0.0006 0.001 -0.637 0.524 -0.003 0.001
C(weekend)[T.1] 0.0021 0.000 5.478 0.000 0.001 0.003
C(rainY)[T.1] 0.0006 0.000 1.286 0.198 -0.000 0.002
C(item_nbr)[T.2] -1.755e-15 0.003 -7.02e-13 1.000 -0.005 0.005
C(item_nbr)[T.3] 4.888e-15 0.003 1.95e-12 1.000 -0.005 0.005
C(item_nbr)[T.4] 1.193e-15 0.003 4.77e-13 1.000 -0.005 0.005
C(item_nbr)[T.5] -1.315e-15 0.003 -5.26e-13 1.000 -0.005 0.005
C(item_nbr)[T.6] 3.665e-15 0.003 1.47e-12 1.000 -0.005 0.005
C(item_nbr)[T.7] -6.194e-15 0.003 -2.48e-12 1.000 -0.005 0.005
C(item_nbr)[T.8] -1.51e-15 0.003 -6.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.9] -3.833e-15 0.003 -1.53e-12 1.000 -0.005 0.005
C(item_nbr)[T.10] 9.281e-15 0.003 3.71e-12 1.000 -0.005 0.005
C(item_nbr)[T.11] 7.702e-16 0.003 3.08e-13 1.000 -0.005 0.005
C(item_nbr)[T.12] 6.56e-16 0.003 2.62e-13 1.000 -0.005 0.005
C(item_nbr)[T.13] -5.981e-16 0.003 -2.39e-13 1.000 -0.005 0.005
C(item_nbr)[T.14] 4.894e-17 0.003 1.96e-14 1.000 -0.005 0.005
C(item_nbr)[T.15] 2.813e-16 0.003 1.12e-13 1.000 -0.005 0.005
C(item_nbr)[T.16] 3.2859 0.003 1022.312 0.000 3.280 3.292
C(item_nbr)[T.17] -8.02e-17 0.003 -3.21e-14 1.000 -0.005 0.005
C(item_nbr)[T.18] -7.565e-17 0.003 -3.03e-14 1.000 -0.005 0.005
C(item_nbr)[T.19] -2.395e-16 0.003 -9.58e-14 1.000 -0.005 0.005
C(item_nbr)[T.20] -2.504e-18 0.003 -1e-15 1.000 -0.005 0.005
C(item_nbr)[T.21] -9.216e-17 0.003 -3.69e-14 1.000 -0.005 0.005
C(item_nbr)[T.22] -1.409e-16 0.003 -5.63e-14 1.000 -0.005 0.005
C(item_nbr)[T.23] 4.309e-16 0.003 1.72e-13 1.000 -0.005 0.005
C(item_nbr)[T.24] 1.389e-16 0.003 5.55e-14 1.000 -0.005 0.005
C(item_nbr)[T.25] 4.9851 0.003 1801.190 0.000 4.980 4.991
C(item_nbr)[T.26] 1.204e-16 0.003 4.81e-14 1.000 -0.005 0.005
C(item_nbr)[T.27] 2.937e-18 0.003 1.17e-15 1.000 -0.005 0.005
C(item_nbr)[T.28] -1.242e-16 0.003 -4.97e-14 1.000 -0.005 0.005
C(item_nbr)[T.29] 1.37e-16 0.003 5.48e-14 1.000 -0.005 0.005
C(item_nbr)[T.30] 4.531e-16 0.003 1.81e-13 1.000 -0.005 0.005
C(item_nbr)[T.31] 1.039e-16 0.003 4.16e-14 1.000 -0.005 0.005
C(item_nbr)[T.32] 4.686e-17 0.003 1.87e-14 1.000 -0.005 0.005
C(item_nbr)[T.33] 1.468e-16 0.003 5.87e-14 1.000 -0.005 0.005
C(item_nbr)[T.34] 1.915e-16 0.003 7.66e-14 1.000 -0.005 0.005
C(item_nbr)[T.35] 2.823e-16 0.003 1.13e-13 1.000 -0.005 0.005
C(item_nbr)[T.36] 1.026e-16 0.003 4.1e-14 1.000 -0.005 0.005
C(item_nbr)[T.37] 3.322e-16 0.003 1.33e-13 1.000 -0.005 0.005
C(item_nbr)[T.38] 2.205e-16 0.003 8.82e-14 1.000 -0.005 0.005
C(item_nbr)[T.39] 0.0268 0.003 10.506 0.000 0.022 0.032
C(item_nbr)[T.40] -3.71e-16 0.003 -1.48e-13 1.000 -0.005 0.005
C(item_nbr)[T.41] 1.29e-16 0.003 5.16e-14 1.000 -0.005 0.005
C(item_nbr)[T.42] 1.642e-17 0.003 6.57e-15 1.000 -0.005 0.005
C(item_nbr)[T.43] 3.414e-16 0.003 1.37e-13 1.000 -0.005 0.005
C(item_nbr)[T.44] 9.11e-17 0.003 3.64e-14 1.000 -0.005 0.005
C(item_nbr)[T.45] 2.425e-16 0.003 9.7e-14 1.000 -0.005 0.005
C(item_nbr)[T.46] 1.745e-16 0.003 6.98e-14 1.000 -0.005 0.005
C(item_nbr)[T.47] 1.097e-16 0.003 4.39e-14 1.000 -0.005 0.005
C(item_nbr)[T.48] 5.998e-17 0.003 2.4e-14 1.000 -0.005 0.005
C(item_nbr)[T.49] 1.65e-16 0.003 6.6e-14 1.000 -0.005 0.005
C(item_nbr)[T.50] 0.0548 0.003 21.244 0.000 0.050 0.060
C(item_nbr)[T.51] 2.623e-16 0.003 1.05e-13 1.000 -0.005 0.005
C(item_nbr)[T.52] 9.742e-17 0.003 3.9e-14 1.000 -0.005 0.005
C(item_nbr)[T.53] 4.273e-17 0.003 1.71e-14 1.000 -0.005 0.005
C(item_nbr)[T.54] 7.577e-17 0.003 3.03e-14 1.000 -0.005 0.005
C(item_nbr)[T.55] -6.364e-17 0.003 -2.54e-14 1.000 -0.005 0.005
C(item_nbr)[T.56] 1.363e-17 0.003 5.45e-15 1.000 -0.005 0.005
C(item_nbr)[T.57] 1.731e-16 0.003 6.92e-14 1.000 -0.005 0.005
C(item_nbr)[T.58] 1.999e-16 0.003 7.99e-14 1.000 -0.005 0.005
C(item_nbr)[T.59] 1.755e-16 0.003 7.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.60] -3.455e-18 0.003 -1.38e-15 1.000 -0.005 0.005
C(item_nbr)[T.61] 2.199e-16 0.003 8.79e-14 1.000 -0.005 0.005
C(item_nbr)[T.62] 4.738e-17 0.003 1.89e-14 1.000 -0.005 0.005
C(item_nbr)[T.63] 5.902e-17 0.003 2.36e-14 1.000 -0.005 0.005
C(item_nbr)[T.64] 1.2735 0.023 54.977 0.000 1.228 1.319
C(item_nbr)[T.65] 2.112e-16 0.003 8.44e-14 1.000 -0.005 0.005
C(item_nbr)[T.66] 1.511e-16 0.003 6.04e-14 1.000 -0.005 0.005
C(item_nbr)[T.67] -5.594e-17 0.003 -2.24e-14 1.000 -0.005 0.005
C(item_nbr)[T.68] 2.368e-16 0.003 9.47e-14 1.000 -0.005 0.005
C(item_nbr)[T.69] 1.483e-16 0.003 5.93e-14 1.000 -0.005 0.005
C(item_nbr)[T.70] 2.377e-16 0.003 9.51e-14 1.000 -0.005 0.005
C(item_nbr)[T.71] 1.526e-16 0.003 6.1e-14 1.000 -0.005 0.005
C(item_nbr)[T.72] 2.527e-16 0.003 1.01e-13 1.000 -0.005 0.005
C(item_nbr)[T.73] -9.595e-17 0.003 -3.84e-14 1.000 -0.005 0.005
C(item_nbr)[T.74] 6.771e-17 0.003 2.71e-14 1.000 -0.005 0.005
C(item_nbr)[T.75] 3.018e-16 0.003 1.21e-13 1.000 -0.005 0.005
C(item_nbr)[T.76] 2.463e-16 0.003 9.85e-14 1.000 -0.005 0.005
C(item_nbr)[T.77] 0.3060 0.013 22.747 0.000 0.280 0.332
C(item_nbr)[T.78] -1.104e-16 0.003 -4.41e-14 1.000 -0.005 0.005
C(item_nbr)[T.79] 2.1e-16 0.003 8.4e-14 1.000 -0.005 0.005
C(item_nbr)[T.80] 2.478e-16 0.003 9.91e-14 1.000 -0.005 0.005
C(item_nbr)[T.81] -6.666e-17 0.003 -2.67e-14 1.000 -0.005 0.005
C(item_nbr)[T.82] 2.075e-16 0.003 8.3e-14 1.000 -0.005 0.005
C(item_nbr)[T.83] -8.399e-17 0.003 -3.36e-14 1.000 -0.005 0.005
C(item_nbr)[T.84] 4.076e-17 0.003 1.63e-14 1.000 -0.005 0.005
C(item_nbr)[T.85] 0.0035 0.003 1.369 0.171 -0.002 0.008
C(item_nbr)[T.86] 1.531e-16 0.003 6.12e-14 1.000 -0.005 0.005
C(item_nbr)[T.87] 7.266e-17 0.003 2.91e-14 1.000 -0.005 0.005
C(item_nbr)[T.88] 2.849e-16 0.003 1.14e-13 1.000 -0.005 0.005
C(item_nbr)[T.89] 3.086e-17 0.003 1.23e-14 1.000 -0.005 0.005
C(item_nbr)[T.90] -2.846e-17 0.003 -1.14e-14 1.000 -0.005 0.005
C(item_nbr)[T.91] 5.477e-17 0.003 2.19e-14 1.000 -0.005 0.005
C(item_nbr)[T.92] 2.234e-17 0.003 8.93e-15 1.000 -0.005 0.005
C(item_nbr)[T.93] 0.0087 0.003 3.321 0.001 0.004 0.014
C(item_nbr)[T.94] 3.103e-18 0.003 1.24e-15 1.000 -0.005 0.005
C(item_nbr)[T.95] 4.077e-17 0.003 1.63e-14 1.000 -0.005 0.005
C(item_nbr)[T.96] -1.696e-17 0.003 -6.78e-15 1.000 -0.005 0.005
C(item_nbr)[T.97] -2.697e-16 0.003 -1.08e-13 1.000 -0.005 0.005
C(item_nbr)[T.98] 7.519e-17 0.003 3.01e-14 1.000 -0.005 0.005
C(item_nbr)[T.99] -2.012e-16 0.003 -8.05e-14 1.000 -0.005 0.005
C(item_nbr)[T.100] -4.362e-17 0.003 -1.74e-14 1.000 -0.005 0.005
C(item_nbr)[T.101] 4.754e-16 0.003 1.9e-13 1.000 -0.005 0.005
C(item_nbr)[T.102] 4.155e-16 0.003 1.66e-13 1.000 -0.005 0.005
C(item_nbr)[T.103] 1.357e-16 0.003 5.42e-14 1.000 -0.005 0.005
C(item_nbr)[T.104] 4.577e-16 0.003 1.83e-13 1.000 -0.005 0.005
C(item_nbr)[T.105] 4.698e-16 0.003 1.88e-13 1.000 -0.005 0.005
C(item_nbr)[T.106] 1.877e-16 0.003 7.5e-14 1.000 -0.005 0.005
C(item_nbr)[T.107] -4.099e-16 0.003 -1.64e-13 1.000 -0.005 0.005
C(item_nbr)[T.108] -1.383e-15 0.003 -5.53e-13 1.000 -0.005 0.005
C(item_nbr)[T.109] -5.911e-16 0.003 -2.36e-13 1.000 -0.005 0.005
C(item_nbr)[T.110] 1.17e-15 0.003 4.68e-13 1.000 -0.005 0.005
C(item_nbr)[T.111] 1.942e-15 0.003 7.76e-13 1.000 -0.005 0.005
scale(heat) -0.0013 0.002 -0.802 0.423 -0.004 0.002
scale(cool) 0.0004 0.000 1.136 0.256 -0.000 0.001
scale(np.log1p(preciptotal)) 0.0001 0.000 0.453 0.651 -0.000 0.001
scale(resultspeed) 0.0014 0.001 2.268 0.023 0.000 0.003
scale(avgspeed) -0.0018 0.001 -2.803 0.005 -0.003 -0.001
scale(relative_humility) 0.0002 0.000 0.821 0.412 -0.000 0.001
scale(windchill) -0.0014 0.002 -0.842 0.400 -0.005 0.002
==============================================================================
Omnibus: 183413.075 Durbin-Watson: 2.003
Prob(Omnibus): 0.000 Jarque-Bera (JB): 27294064881.035
Skew: -14.779 Prob(JB): 0.00
Kurtosis: 2674.110 Cond. No. 232.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
6 - 3. Variable transformation: df2 (log1p_units) + outlier removal + preciptotal transformation + drop tmax/tmin/tavg/sunset/sunrise/daytime/stnpressure/sealevel + drop wetbulb/dewpoint + drop avgspeed/relative_humility (based on VIF) --> see the VIF section below.
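Since overfitting was suspected earlier, a simple hold-out check of the reduced specification fitted in the next cell is a useful sanity test. The split below is a sketch and not part of the original analysis; it assumes `df2_1`, `numpy as np`, and `statsmodels.api as sm` are in scope, and that every category level appears in the training split.
###Code
import numpy as np
import statsmodels.api as sm

# 80/20 hold-out split of the cleaned data.
rng = np.random.RandomState(0)
mask = rng.rand(len(df2_1)) < 0.8
train, test = df2_1[mask], df2_1[~mask]

reduced_formula = ('log1p_units ~ scale(heat) + scale(cool) + scale(np.log1p(preciptotal))'
                   ' + scale(resultspeed) + C(resultdir) + C(year) + C(month)'
                   ' + scale(windchill) + C(weekend) + C(rainY) + C(store_nbr)'
                   ' + C(item_nbr) + 0')
fit = sm.OLS.from_formula(reduced_formula, data=train).fit()

# Out-of-sample RMSE on the log1p scale; a large gap versus the in-sample
# error would support the overfitting concern.
pred = fit.predict(test)
print(np.sqrt(np.mean((test['log1p_units'] - pred) ** 2)))
###Output
_____no_output_____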
###Code
# OLS - df2_1_1
model2_1_1 = sm.OLS.from_formula('log1p_units ~ scale(heat) + scale(cool)\
+ scale(np.log1p(preciptotal)) + scale(resultspeed) \
+ C(resultdir) + C(year) + C(month) + scale(windchill) + C(weekend) \
+ C(rainY) + C(store_nbr) + C(item_nbr) + 0', data = df2_1)
result2_1_1 = model2_1_1.fit()
result = result2_1_1  # keep the generic handle without refitting the same model
print(result2_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.987
Model: OLS Adj. R-squared: 0.987
Method: Least Squares F-statistic: 4.191e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 01:21:36 Log-Likelihood: 1.4191e+05
No. Observations: 91800 AIC: -2.835e+05
Df Residuals: 91634 BIC: -2.819e+05
Df Model: 165
Covariance Type: nonrobust
================================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------------
C(resultdir)[1.0] 0.0007 0.003 0.266 0.790 -0.004 0.006
C(resultdir)[2.0] 0.0004 0.003 0.162 0.872 -0.005 0.006
C(resultdir)[3.0] 0.0014 0.002 0.618 0.536 -0.003 0.006
C(resultdir)[4.0] 0.0006 0.002 0.271 0.786 -0.004 0.005
C(resultdir)[5.0] -0.0002 0.002 -0.078 0.938 -0.005 0.004
C(resultdir)[6.0] 0.0019 0.002 0.813 0.416 -0.003 0.007
C(resultdir)[7.0] 0.0025 0.002 1.079 0.281 -0.002 0.007
C(resultdir)[8.0] 0.0021 0.003 0.806 0.420 -0.003 0.007
C(resultdir)[9.0] 0.0011 0.003 0.445 0.656 -0.004 0.006
C(resultdir)[10.0] 0.0031 0.003 0.968 0.333 -0.003 0.009
C(resultdir)[11.0] -0.0003 0.003 -0.123 0.902 -0.006 0.005
C(resultdir)[12.0] 0.0017 0.003 0.631 0.528 -0.004 0.007
C(resultdir)[13.0] 0.0012 0.003 0.471 0.638 -0.004 0.006
C(resultdir)[14.0] 0.0012 0.003 0.405 0.686 -0.004 0.007
C(resultdir)[15.0] 0.0021 0.003 0.763 0.445 -0.003 0.007
C(resultdir)[16.0] 0.0017 0.003 0.579 0.563 -0.004 0.008
C(resultdir)[17.0] 0.0028 0.003 0.874 0.382 -0.003 0.009
C(resultdir)[18.0] 0.0021 0.003 0.764 0.445 -0.003 0.007
C(resultdir)[19.0] -0.0002 0.003 -0.085 0.932 -0.005 0.005
C(resultdir)[20.0] 0.0012 0.002 0.516 0.606 -0.003 0.006
C(resultdir)[21.0] 0.0018 0.002 0.844 0.398 -0.002 0.006
C(resultdir)[22.0] 0.0016 0.002 0.759 0.448 -0.003 0.006
C(resultdir)[23.0] 0.0025 0.002 1.211 0.226 -0.002 0.007
C(resultdir)[24.0] 0.0005 0.002 0.246 0.806 -0.004 0.005
C(resultdir)[25.0] 0.0017 0.002 0.811 0.418 -0.002 0.006
C(resultdir)[26.0] 0.0019 0.002 0.918 0.359 -0.002 0.006
C(resultdir)[27.0] 0.0022 0.002 1.083 0.279 -0.002 0.006
C(resultdir)[28.0] 0.0027 0.002 1.324 0.186 -0.001 0.007
C(resultdir)[29.0] 0.0023 0.002 1.143 0.253 -0.002 0.006
C(resultdir)[30.0] 0.0021 0.002 0.963 0.335 -0.002 0.006
C(resultdir)[31.0] 0.0001 0.002 0.056 0.956 -0.004 0.004
C(resultdir)[32.0] 0.0023 0.002 1.077 0.282 -0.002 0.006
C(resultdir)[33.0] -6.408e-05 0.002 -0.028 0.978 -0.005 0.004
C(resultdir)[34.0] 0.0018 0.002 0.764 0.445 -0.003 0.007
C(resultdir)[35.0] 0.0022 0.002 0.916 0.360 -0.002 0.007
C(resultdir)[36.0] -0.0004 0.003 -0.131 0.896 -0.006 0.005
C(year)[T.2013] 0.0005 0.000 1.305 0.192 -0.000 0.001
C(year)[T.2014] -0.0012 0.000 -2.529 0.011 -0.002 -0.000
C(month)[T.2] -0.0009 0.001 -1.053 0.292 -0.003 0.001
C(month)[T.3] -0.0027 0.001 -3.107 0.002 -0.004 -0.001
C(month)[T.4] -0.0030 0.001 -3.000 0.003 -0.005 -0.001
C(month)[T.5] -0.0035 0.001 -2.962 0.003 -0.006 -0.001
C(month)[T.6] -0.0036 0.001 -2.758 0.006 -0.006 -0.001
C(month)[T.7] -0.0047 0.001 -3.278 0.001 -0.007 -0.002
C(month)[T.8] -0.0025 0.001 -1.893 0.058 -0.005 9.07e-05
C(month)[T.9] -0.0031 0.001 -2.540 0.011 -0.005 -0.001
C(month)[T.10] -0.0031 0.001 -2.883 0.004 -0.005 -0.001
C(month)[T.11] -0.0011 0.001 -1.099 0.272 -0.003 0.001
C(month)[T.12] -0.0006 0.001 -0.600 0.548 -0.002 0.001
C(weekend)[T.1] 0.0021 0.000 5.452 0.000 0.001 0.003
C(rainY)[T.1] 0.0007 0.000 1.774 0.076 -7.52e-05 0.002
C(item_nbr)[T.2] -3.032e-17 0.003 -1.21e-14 1.000 -0.005 0.005
C(item_nbr)[T.3] -1.26e-15 0.003 -5.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.4] 4.096e-16 0.003 1.64e-13 1.000 -0.005 0.005
C(item_nbr)[T.5] 4.665e-16 0.003 1.87e-13 1.000 -0.005 0.005
C(item_nbr)[T.6] 2.738e-16 0.003 1.09e-13 1.000 -0.005 0.005
C(item_nbr)[T.7] -2.603e-16 0.003 -1.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.8] 1.397e-15 0.003 5.59e-13 1.000 -0.005 0.005
C(item_nbr)[T.9] 1.708e-15 0.003 6.83e-13 1.000 -0.005 0.005
C(item_nbr)[T.10] 1.139e-15 0.003 4.56e-13 1.000 -0.005 0.005
C(item_nbr)[T.11] 1.655e-15 0.003 6.62e-13 1.000 -0.005 0.005
C(item_nbr)[T.12] 3.149e-16 0.003 1.26e-13 1.000 -0.005 0.005
C(item_nbr)[T.13] 1.067e-15 0.003 4.27e-13 1.000 -0.005 0.005
C(item_nbr)[T.14] 1.586e-16 0.003 6.34e-14 1.000 -0.005 0.005
C(item_nbr)[T.15] 2.598e-16 0.003 1.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.16] 3.2860 0.003 1022.294 0.000 3.280 3.292
C(item_nbr)[T.17] -2.841e-16 0.003 -1.14e-13 1.000 -0.005 0.005
C(item_nbr)[T.18] 6.401e-17 0.003 2.56e-14 1.000 -0.005 0.005
C(item_nbr)[T.19] -1.439e-16 0.003 -5.75e-14 1.000 -0.005 0.005
C(item_nbr)[T.20] -2.068e-16 0.003 -8.27e-14 1.000 -0.005 0.005
C(item_nbr)[T.21] 2.504e-16 0.003 1e-13 1.000 -0.005 0.005
C(item_nbr)[T.22] -1.784e-16 0.003 -7.13e-14 1.000 -0.005 0.005
C(item_nbr)[T.23] 3.349e-16 0.003 1.34e-13 1.000 -0.005 0.005
C(item_nbr)[T.24] 7.652e-17 0.003 3.06e-14 1.000 -0.005 0.005
C(item_nbr)[T.25] 4.9851 0.003 1801.123 0.000 4.980 4.991
C(item_nbr)[T.26] -1.353e-17 0.003 -5.41e-15 1.000 -0.005 0.005
C(item_nbr)[T.27] 2.017e-16 0.003 8.06e-14 1.000 -0.005 0.005
C(item_nbr)[T.28] 4.531e-16 0.003 1.81e-13 1.000 -0.005 0.005
C(item_nbr)[T.29] 1.17e-16 0.003 4.68e-14 1.000 -0.005 0.005
C(item_nbr)[T.30] 9.383e-17 0.003 3.75e-14 1.000 -0.005 0.005
C(item_nbr)[T.31] -2.818e-16 0.003 -1.13e-13 1.000 -0.005 0.005
C(item_nbr)[T.32] -6.296e-18 0.003 -2.52e-15 1.000 -0.005 0.005
C(item_nbr)[T.33] 2.839e-16 0.003 1.14e-13 1.000 -0.005 0.005
C(item_nbr)[T.34] -6.849e-17 0.003 -2.74e-14 1.000 -0.005 0.005
C(item_nbr)[T.35] 2.373e-16 0.003 9.49e-14 1.000 -0.005 0.005
C(item_nbr)[T.36] 6.838e-17 0.003 2.73e-14 1.000 -0.005 0.005
C(item_nbr)[T.37] 1.731e-16 0.003 6.92e-14 1.000 -0.005 0.005
C(item_nbr)[T.38] 1.728e-16 0.003 6.91e-14 1.000 -0.005 0.005
C(item_nbr)[T.39] 0.0268 0.003 10.508 0.000 0.022 0.032
C(item_nbr)[T.40] 2.611e-16 0.003 1.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.41] 1.598e-16 0.003 6.39e-14 1.000 -0.005 0.005
C(item_nbr)[T.42] 1.603e-16 0.003 6.41e-14 1.000 -0.005 0.005
C(item_nbr)[T.43] 6.511e-17 0.003 2.6e-14 1.000 -0.005 0.005
C(item_nbr)[T.44] 3.526e-16 0.003 1.41e-13 1.000 -0.005 0.005
C(item_nbr)[T.45] 1.72e-16 0.003 6.88e-14 1.000 -0.005 0.005
C(item_nbr)[T.46] 1.04e-16 0.003 4.16e-14 1.000 -0.005 0.005
C(item_nbr)[T.47] 3.241e-16 0.003 1.3e-13 1.000 -0.005 0.005
C(item_nbr)[T.48] 1.595e-16 0.003 6.38e-14 1.000 -0.005 0.005
C(item_nbr)[T.49] 1.797e-16 0.003 7.19e-14 1.000 -0.005 0.005
C(item_nbr)[T.50] 0.0548 0.003 21.241 0.000 0.050 0.060
C(item_nbr)[T.51] 1.074e-16 0.003 4.29e-14 1.000 -0.005 0.005
C(item_nbr)[T.52] 2.523e-16 0.003 1.01e-13 1.000 -0.005 0.005
C(item_nbr)[T.53] 1.279e-16 0.003 5.12e-14 1.000 -0.005 0.005
C(item_nbr)[T.54] 2.465e-17 0.003 9.85e-15 1.000 -0.005 0.005
C(item_nbr)[T.55] 1.674e-16 0.003 6.69e-14 1.000 -0.005 0.005
C(item_nbr)[T.56] 1.26e-16 0.003 5.04e-14 1.000 -0.005 0.005
C(item_nbr)[T.57] 1.506e-16 0.003 6.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.58] 1.256e-16 0.003 5.02e-14 1.000 -0.005 0.005
C(item_nbr)[T.59] -5.282e-17 0.003 -2.11e-14 1.000 -0.005 0.005
C(item_nbr)[T.60] 2.705e-16 0.003 1.08e-13 1.000 -0.005 0.005
C(item_nbr)[T.61] 1.825e-16 0.003 7.3e-14 1.000 -0.005 0.005
C(item_nbr)[T.62] 3.136e-16 0.003 1.25e-13 1.000 -0.005 0.005
C(item_nbr)[T.63] 2.525e-16 0.003 1.01e-13 1.000 -0.005 0.005
C(item_nbr)[T.64] 1.2736 0.023 54.980 0.000 1.228 1.319
C(item_nbr)[T.65] 1.799e-16 0.003 7.19e-14 1.000 -0.005 0.005
C(item_nbr)[T.66] 1.967e-16 0.003 7.86e-14 1.000 -0.005 0.005
C(item_nbr)[T.67] -6.735e-17 0.003 -2.69e-14 1.000 -0.005 0.005
C(item_nbr)[T.68] 3.645e-16 0.003 1.46e-13 1.000 -0.005 0.005
C(item_nbr)[T.69] 1.626e-16 0.003 6.5e-14 1.000 -0.005 0.005
C(item_nbr)[T.70] 2.704e-16 0.003 1.08e-13 1.000 -0.005 0.005
C(item_nbr)[T.71] -7.954e-18 0.003 -3.18e-15 1.000 -0.005 0.005
C(item_nbr)[T.72] 2.788e-16 0.003 1.11e-13 1.000 -0.005 0.005
C(item_nbr)[T.73] 1.251e-16 0.003 5e-14 1.000 -0.005 0.005
C(item_nbr)[T.74] 1.713e-16 0.003 6.85e-14 1.000 -0.005 0.005
C(item_nbr)[T.75] 4.968e-17 0.003 1.99e-14 1.000 -0.005 0.005
C(item_nbr)[T.76] 3.979e-16 0.003 1.59e-13 1.000 -0.005 0.005
C(item_nbr)[T.77] 0.3061 0.013 22.754 0.000 0.280 0.332
C(item_nbr)[T.78] 6.382e-17 0.003 2.55e-14 1.000 -0.005 0.005
C(item_nbr)[T.79] 2.756e-17 0.003 1.1e-14 1.000 -0.005 0.005
C(item_nbr)[T.80] 9.107e-17 0.003 3.64e-14 1.000 -0.005 0.005
C(item_nbr)[T.81] 1.071e-16 0.003 4.28e-14 1.000 -0.005 0.005
C(item_nbr)[T.82] 7.929e-17 0.003 3.17e-14 1.000 -0.005 0.005
C(item_nbr)[T.83] 6.34e-17 0.003 2.53e-14 1.000 -0.005 0.005
C(item_nbr)[T.84] 2.371e-16 0.003 9.48e-14 1.000 -0.005 0.005
C(item_nbr)[T.85] 0.0035 0.003 1.371 0.170 -0.001 0.008
C(item_nbr)[T.86] 1.449e-16 0.003 5.79e-14 1.000 -0.005 0.005
C(item_nbr)[T.87] 1.284e-16 0.003 5.13e-14 1.000 -0.005 0.005
C(item_nbr)[T.88] 7.734e-17 0.003 3.09e-14 1.000 -0.005 0.005
C(item_nbr)[T.89] 2.202e-16 0.003 8.8e-14 1.000 -0.005 0.005
C(item_nbr)[T.90] 3.182e-16 0.003 1.27e-13 1.000 -0.005 0.005
C(item_nbr)[T.91] 1.132e-16 0.003 4.52e-14 1.000 -0.005 0.005
C(item_nbr)[T.92] 4.831e-17 0.003 1.93e-14 1.000 -0.005 0.005
C(item_nbr)[T.93] 0.0087 0.003 3.319 0.001 0.004 0.014
C(item_nbr)[T.94] 1.939e-16 0.003 7.75e-14 1.000 -0.005 0.005
C(item_nbr)[T.95] 2.749e-16 0.003 1.1e-13 1.000 -0.005 0.005
C(item_nbr)[T.96] 2.602e-16 0.003 1.04e-13 1.000 -0.005 0.005
C(item_nbr)[T.97] 1.663e-16 0.003 6.65e-14 1.000 -0.005 0.005
C(item_nbr)[T.98] 4.421e-16 0.003 1.77e-13 1.000 -0.005 0.005
C(item_nbr)[T.99] 5.862e-16 0.003 2.34e-13 1.000 -0.005 0.005
C(item_nbr)[T.100] -3.713e-17 0.003 -1.48e-14 1.000 -0.005 0.005
C(item_nbr)[T.101] -8.28e-17 0.003 -3.31e-14 1.000 -0.005 0.005
C(item_nbr)[T.102] 4.324e-16 0.003 1.73e-13 1.000 -0.005 0.005
C(item_nbr)[T.103] 2.003e-16 0.003 8.01e-14 1.000 -0.005 0.005
C(item_nbr)[T.104] 1.771e-16 0.003 7.08e-14 1.000 -0.005 0.005
C(item_nbr)[T.105] 8.565e-16 0.003 3.42e-13 1.000 -0.005 0.005
C(item_nbr)[T.106] -6.634e-17 0.003 -2.65e-14 1.000 -0.005 0.005
C(item_nbr)[T.107] 2.519e-16 0.003 1.01e-13 1.000 -0.005 0.005
C(item_nbr)[T.108] -8.256e-16 0.003 -3.3e-13 1.000 -0.005 0.005
C(item_nbr)[T.109] 7.928e-16 0.003 3.17e-13 1.000 -0.005 0.005
C(item_nbr)[T.110] -2.12e-16 0.003 -8.48e-14 1.000 -0.005 0.005
C(item_nbr)[T.111] 1.622e-15 0.003 6.49e-13 1.000 -0.005 0.005
scale(heat) -0.0002 0.002 -0.123 0.902 -0.003 0.003
scale(cool) 0.0002 0.000 0.623 0.533 -0.000 0.001
scale(np.log1p(preciptotal)) -1.059e-06 0.000 -0.005 0.996 -0.000 0.000
scale(resultspeed) -0.0002 0.000 -0.927 0.354 -0.001 0.000
scale(windchill) -0.0003 0.002 -0.180 0.857 -0.004 0.003
==============================================================================
Omnibus: 183428.889 Durbin-Watson: 2.003
Prob(Omnibus): 0.000 Jarque-Bera (JB): 27310954330.584
Skew: -14.782 Prob(JB): 0.00
Kurtosis: 2674.937 Cond. No. 216.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Dropping wetbulb and dewpoint as well reduced the condition number to 232. 6-4. Variable transformation: df2 (log1p_units) + drop tmax/tmin/tavg/sunset/sunrise/daytime/stnpressure/sealevel + drop wetbulb/dewpoint + drop avgspeed/relative_humility (based on VIF) + drop the non-significant variables -> regularization
###Code
# OLS - df2_1_1
model2_1_1 = sm.OLS.from_formula('log1p_units ~ C(month) + C(weekend) \
+ C(rainY) + C(item_nbr) + 0', data = df2)
result = model2_1_1.fit()
result2_1_1 = model2_1_1.fit()
print(result2_1_1.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log1p_units R-squared: 0.949
Model: OLS Adj. R-squared: 0.949
Method: Least Squares F-statistic: 1.445e+04
Date: Fri, 06 Jul 2018 Prob (F-statistic): 0.00
Time: 02:04:50 Log-Likelihood: 57673.
No. Observations: 95127 AIC: -1.151e+05
Df Residuals: 95003 BIC: -1.139e+05
Df Model: 123
Covariance Type: nonrobust
======================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------
C(month)[1] -0.0027 0.005 -0.576 0.565 -0.012 0.007
C(month)[2] -0.0035 0.005 -0.738 0.461 -0.013 0.006
C(month)[3] 0.0003 0.005 0.060 0.952 -0.009 0.010
C(month)[4] -0.0039 0.005 -0.833 0.405 -0.013 0.005
C(month)[5] -0.0032 0.005 -0.671 0.502 -0.012 0.006
C(month)[6] -0.0020 0.005 -0.422 0.673 -0.011 0.007
C(month)[7] -0.0062 0.005 -1.312 0.190 -0.016 0.003
C(month)[8] -0.0040 0.005 -0.838 0.402 -0.013 0.005
C(month)[9] -0.0092 0.005 -1.948 0.051 -0.019 5.64e-05
C(month)[10] -0.0084 0.005 -1.778 0.075 -0.018 0.001
C(month)[11] -0.0024 0.005 -0.494 0.622 -0.012 0.007
C(month)[12] 0.0033 0.005 0.675 0.500 -0.006 0.013
C(weekend)[T.1] 0.0077 0.001 8.138 0.000 0.006 0.010
C(rainY)[T.1] 0.0030 0.001 3.488 0.000 0.001 0.005
C(item_nbr)[T.2] 2.901e-15 0.006 4.55e-13 1.000 -0.013 0.013
C(item_nbr)[T.3] 8.532e-16 0.006 1.34e-13 1.000 -0.013 0.013
C(item_nbr)[T.4] -1.904e-17 0.006 -2.98e-15 1.000 -0.013 0.013
C(item_nbr)[T.5] 1.734e-15 0.006 2.72e-13 1.000 -0.013 0.013
C(item_nbr)[T.6] -1.486e-15 0.006 -2.33e-13 1.000 -0.013 0.013
C(item_nbr)[T.7] 1.091e-15 0.006 1.71e-13 1.000 -0.013 0.013
C(item_nbr)[T.8] 6.014e-15 0.006 9.43e-13 1.000 -0.013 0.013
C(item_nbr)[T.9] 9.439e-16 0.006 1.48e-13 1.000 -0.013 0.013
C(item_nbr)[T.10] -1.179e-15 0.006 -1.85e-13 1.000 -0.013 0.013
C(item_nbr)[T.11] 2.696e-15 0.006 4.23e-13 1.000 -0.013 0.013
C(item_nbr)[T.12] -3.583e-16 0.006 -5.62e-14 1.000 -0.013 0.013
C(item_nbr)[T.13] 2.291e-15 0.006 3.59e-13 1.000 -0.013 0.013
C(item_nbr)[T.14] 3.003e-16 0.006 4.71e-14 1.000 -0.013 0.013
C(item_nbr)[T.15] -5.855e-15 0.006 -9.18e-13 1.000 -0.013 0.013
C(item_nbr)[T.16] 3.4073 0.006 534.128 0.000 3.395 3.420
C(item_nbr)[T.17] 2.669e-15 0.006 4.18e-13 1.000 -0.013 0.013
C(item_nbr)[T.18] 3.655e-15 0.006 5.73e-13 1.000 -0.013 0.013
C(item_nbr)[T.19] 1.19e-15 0.006 1.87e-13 1.000 -0.013 0.013
C(item_nbr)[T.20] -2.971e-17 0.006 -4.66e-15 1.000 -0.013 0.013
C(item_nbr)[T.21] 9.916e-16 0.006 1.55e-13 1.000 -0.013 0.013
C(item_nbr)[T.22] 6.159e-17 0.006 9.65e-15 1.000 -0.013 0.013
C(item_nbr)[T.23] -1.79e-15 0.006 -2.81e-13 1.000 -0.013 0.013
C(item_nbr)[T.24] 1.356e-15 0.006 2.13e-13 1.000 -0.013 0.013
C(item_nbr)[T.25] 5.0048 0.006 784.552 0.000 4.992 5.017
C(item_nbr)[T.26] 1.82e-15 0.006 2.85e-13 1.000 -0.013 0.013
C(item_nbr)[T.27] -2.975e-15 0.006 -4.66e-13 1.000 -0.013 0.013
C(item_nbr)[T.28] 1.635e-15 0.006 2.56e-13 1.000 -0.013 0.013
C(item_nbr)[T.29] -1.353e-15 0.006 -2.12e-13 1.000 -0.013 0.013
C(item_nbr)[T.30] 2.427e-15 0.006 3.8e-13 1.000 -0.013 0.013
C(item_nbr)[T.31] -2.62e-15 0.006 -4.11e-13 1.000 -0.013 0.013
C(item_nbr)[T.32] -2.81e-15 0.006 -4.4e-13 1.000 -0.013 0.013
C(item_nbr)[T.33] 1.026e-15 0.006 1.61e-13 1.000 -0.013 0.013
C(item_nbr)[T.34] 7.784e-15 0.006 1.22e-12 1.000 -0.013 0.013
C(item_nbr)[T.35] -4.762e-15 0.006 -7.46e-13 1.000 -0.013 0.013
C(item_nbr)[T.36] -3.181e-15 0.006 -4.99e-13 1.000 -0.013 0.013
C(item_nbr)[T.37] -8.851e-16 0.006 -1.39e-13 1.000 -0.013 0.013
C(item_nbr)[T.38] -3.574e-15 0.006 -5.6e-13 1.000 -0.013 0.013
C(item_nbr)[T.39] 0.0788 0.006 12.353 0.000 0.066 0.091
C(item_nbr)[T.40] 3.445e-15 0.006 5.4e-13 1.000 -0.013 0.013
C(item_nbr)[T.41] 2.619e-17 0.006 4.11e-15 1.000 -0.013 0.013
C(item_nbr)[T.42] -4.173e-16 0.006 -6.54e-14 1.000 -0.013 0.013
C(item_nbr)[T.43] 3.136e-15 0.006 4.92e-13 1.000 -0.013 0.013
C(item_nbr)[T.44] 2.048e-15 0.006 3.21e-13 1.000 -0.013 0.013
C(item_nbr)[T.45] 3.603e-15 0.006 5.65e-13 1.000 -0.013 0.013
C(item_nbr)[T.46] 1.268e-16 0.006 1.99e-14 1.000 -0.013 0.013
C(item_nbr)[T.47] 1.251e-15 0.006 1.96e-13 1.000 -0.013 0.013
C(item_nbr)[T.48] 3.825e-15 0.006 6e-13 1.000 -0.013 0.013
C(item_nbr)[T.49] 2.951e-15 0.006 4.63e-13 1.000 -0.013 0.013
C(item_nbr)[T.50] 0.1379 0.006 21.619 0.000 0.125 0.150
C(item_nbr)[T.51] 1.367e-15 0.006 2.14e-13 1.000 -0.013 0.013
C(item_nbr)[T.52] 3.427e-15 0.006 5.37e-13 1.000 -0.013 0.013
C(item_nbr)[T.53] -2.898e-15 0.006 -4.54e-13 1.000 -0.013 0.013
C(item_nbr)[T.54] -7.58e-17 0.006 -1.19e-14 1.000 -0.013 0.013
C(item_nbr)[T.55] 7.739e-16 0.006 1.21e-13 1.000 -0.013 0.013
C(item_nbr)[T.56] 3.641e-16 0.006 5.71e-14 1.000 -0.013 0.013
C(item_nbr)[T.57] 2.091e-15 0.006 3.28e-13 1.000 -0.013 0.013
C(item_nbr)[T.58] -3.509e-15 0.006 -5.5e-13 1.000 -0.013 0.013
C(item_nbr)[T.59] 1.192e-15 0.006 1.87e-13 1.000 -0.013 0.013
C(item_nbr)[T.60] 1.983e-15 0.006 3.11e-13 1.000 -0.013 0.013
C(item_nbr)[T.61] 2.523e-15 0.006 3.95e-13 1.000 -0.013 0.013
C(item_nbr)[T.62] -9.349e-16 0.006 -1.47e-13 1.000 -0.013 0.013
C(item_nbr)[T.63] 1.255e-15 0.006 1.97e-13 1.000 -0.013 0.013
C(item_nbr)[T.64] 0.3226 0.006 50.570 0.000 0.310 0.335
C(item_nbr)[T.65] -2.446e-15 0.006 -3.83e-13 1.000 -0.013 0.013
C(item_nbr)[T.66] 1.232e-15 0.006 1.93e-13 1.000 -0.013 0.013
C(item_nbr)[T.67] -3.922e-16 0.006 -6.15e-14 1.000 -0.013 0.013
C(item_nbr)[T.68] 1.918e-15 0.006 3.01e-13 1.000 -0.013 0.013
C(item_nbr)[T.69] 8.735e-16 0.006 1.37e-13 1.000 -0.013 0.013
C(item_nbr)[T.70] 1.58e-15 0.006 2.48e-13 1.000 -0.013 0.013
C(item_nbr)[T.71] -3.75e-16 0.006 -5.88e-14 1.000 -0.013 0.013
C(item_nbr)[T.72] -6.65e-16 0.006 -1.04e-13 1.000 -0.013 0.013
C(item_nbr)[T.73] -6.722e-16 0.006 -1.05e-13 1.000 -0.013 0.013
C(item_nbr)[T.74] 2.444e-15 0.006 3.83e-13 1.000 -0.013 0.013
C(item_nbr)[T.75] 1.767e-15 0.006 2.77e-13 1.000 -0.013 0.013
C(item_nbr)[T.76] -6.511e-16 0.006 -1.02e-13 1.000 -0.013 0.013
C(item_nbr)[T.77] 0.4039 0.006 63.318 0.000 0.391 0.416
C(item_nbr)[T.78] 1.252e-15 0.006 1.96e-13 1.000 -0.013 0.013
C(item_nbr)[T.79] -1.572e-15 0.006 -2.46e-13 1.000 -0.013 0.013
C(item_nbr)[T.80] 9.471e-16 0.006 1.48e-13 1.000 -0.013 0.013
C(item_nbr)[T.81] 3.655e-16 0.006 5.73e-14 1.000 -0.013 0.013
C(item_nbr)[T.82] 2.965e-15 0.006 4.65e-13 1.000 -0.013 0.013
C(item_nbr)[T.83] 2.309e-15 0.006 3.62e-13 1.000 -0.013 0.013
C(item_nbr)[T.84] -3.484e-15 0.006 -5.46e-13 1.000 -0.013 0.013
C(item_nbr)[T.85] 0.0462 0.006 7.249 0.000 0.034 0.059
C(item_nbr)[T.86] 1.048e-15 0.006 1.64e-13 1.000 -0.013 0.013
C(item_nbr)[T.87] 1.237e-15 0.006 1.94e-13 1.000 -0.013 0.013
C(item_nbr)[T.88] 2.648e-16 0.006 4.15e-14 1.000 -0.013 0.013
C(item_nbr)[T.89] 4.881e-17 0.006 7.65e-15 1.000 -0.013 0.013
C(item_nbr)[T.90] -2.229e-15 0.006 -3.49e-13 1.000 -0.013 0.013
C(item_nbr)[T.91] 1.817e-15 0.006 2.85e-13 1.000 -0.013 0.013
C(item_nbr)[T.92] 5.283e-15 0.006 8.28e-13 1.000 -0.013 0.013
C(item_nbr)[T.93] 0.2042 0.006 32.017 0.000 0.192 0.217
C(item_nbr)[T.94] -9.843e-16 0.006 -1.54e-13 1.000 -0.013 0.013
C(item_nbr)[T.95] -1.688e-15 0.006 -2.65e-13 1.000 -0.013 0.013
C(item_nbr)[T.96] 1.312e-15 0.006 2.06e-13 1.000 -0.013 0.013
C(item_nbr)[T.97] 5.332e-17 0.006 8.36e-15 1.000 -0.013 0.013
C(item_nbr)[T.98] 5.206e-16 0.006 8.16e-14 1.000 -0.013 0.013
C(item_nbr)[T.99] 4.609e-16 0.006 7.23e-14 1.000 -0.013 0.013
C(item_nbr)[T.100] 5.345e-16 0.006 8.38e-14 1.000 -0.013 0.013
C(item_nbr)[T.101] 2.129e-16 0.006 3.34e-14 1.000 -0.013 0.013
C(item_nbr)[T.102] -4.479e-16 0.006 -7.02e-14 1.000 -0.013 0.013
C(item_nbr)[T.103] 5.018e-15 0.006 7.87e-13 1.000 -0.013 0.013
C(item_nbr)[T.104] 6.774e-17 0.006 1.06e-14 1.000 -0.013 0.013
C(item_nbr)[T.105] -1.804e-15 0.006 -2.83e-13 1.000 -0.013 0.013
C(item_nbr)[T.106] -2.57e-16 0.006 -4.03e-14 1.000 -0.013 0.013
C(item_nbr)[T.107] 2.827e-16 0.006 4.43e-14 1.000 -0.013 0.013
C(item_nbr)[T.108] -1.403e-15 0.006 -2.2e-13 1.000 -0.013 0.013
C(item_nbr)[T.109] -1.616e-16 0.006 -2.53e-14 1.000 -0.013 0.013
C(item_nbr)[T.110] -2.011e-16 0.006 -3.15e-14 1.000 -0.013 0.013
C(item_nbr)[T.111] 5.471e-16 0.006 8.58e-14 1.000 -0.013 0.013
==============================================================================
Omnibus: 99723.253 Durbin-Watson: 2.003
Prob(Omnibus): 0.000 Jarque-Bera (JB): 168257149.095
Skew: 4.179 Prob(JB): 0.00
Kurtosis: 208.865 Cond. No. 92.3
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
F-test
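The cell below calls `anova_lm` on the fitted model. For reference, the F statistic reported for each term has the standard form
$$ F = \frac{\mathrm{SS}_{\mathrm{term}} / \mathrm{df}_{\mathrm{term}}}{\mathrm{SS}_{\mathrm{resid}} / \mathrm{df}_{\mathrm{resid}}}, $$
i.e. the term's mean square divided by the residual mean square; a large $F$ (small p-value) indicates that the term explains a non-trivial share of the variance.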
###Code
sm.stats.anova_lm(model2_1_1.fit())
###Output
C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\_distn_infrastructure.py:879: RuntimeWarning: invalid value encountered in greater
return (self.a < x) & (x < self.b)
C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\_distn_infrastructure.py:879: RuntimeWarning: invalid value encountered in less
return (self.a < x) & (x < self.b)
C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\_distn_infrastructure.py:1821: RuntimeWarning: invalid value encountered in less_equal
cond2 = cond0 & (x <= self.a)
###Markdown
7. Normality test of the residuals of result2: the residuals are not normally distributed.
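Besides the Q-Q plot below, a numerical test can complement the visual check. The following is a minimal sketch, assuming `result2_1_1` from the cell above is still in scope:

```python
# Minimal sketch: D'Agostino's K^2 normality test on the OLS residuals.
# A very small p-value means the residuals are unlikely to be normally distributed.
import scipy.stats

stat, pval = scipy.stats.normaltest(result2_1_1.resid)
print(f"K^2 statistic = {stat:.1f}, p-value = {pval:.3g}")
```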
###Code
%matplotlib inline
sp.stats.probplot(result2_1_1.resid, plot=plt)
plt.show()
###Output
_____no_output_____
###Markdown
8. Reducing multicollinearity: VIF
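For reference, the variance inflation factor of feature $j$ is
$$ \mathrm{VIF}_j = \frac{1}{1 - R_j^2}, $$
where $R_j^2$ is the $R^2$ obtained by regressing feature $j$ on all of the remaining features; values far above 5-10 are the usual warning sign of strong multicollinearity.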
###Code
df2_1.columns
# sampleX = df2_1.loc[:, cols]
# sampley = df2_1.loc[:,"log1p_units"]
# sns.pairplot(sampleX)
# plt.show()
from statsmodels.stats.outliers_influence import variance_inflation_factor
cols = ['tmax', 'tmin', 'tavg', 'dewpoint', 'wetbulb', 'heat', 'cool', 'sunrise', 'sunset',\
'snowfall', 'preciptotal', 'stnpressure', 'sealevel', 'resultspeed', 'avgspeed', \
'relative_humility', 'windchill', 'daytime', 'item_nbr']
y = df2_1.loc[:,cols]
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(y.values, i) for i in range(y.shape[1])]
vif["features"] = y.columns
vif = vif.sort_values("VIF Factor", ascending=False).reset_index(drop=True)
vif
###Output
C:\ProgramData\Anaconda3\lib\site-packages\statsmodels\stats\outliers_influence.py:167: RuntimeWarning: divide by zero encountered in double_scalars
vif = 1. / (1. - r_squared_i)
###Markdown
Drop tmax, sunrise, tavg, daytime, tmin, sunset, stnpressure, and sealevel, then rerun OLS on df2_1 (see 6-1).
###Code
cols = ['dewpoint', 'wetbulb', 'heat', 'cool', 'snowfall', 'preciptotal', 'resultspeed', 'avgspeed', \
'relative_humility', 'windchill', 'item_nbr']
sampleX = df2_1.loc[:, cols]
sampley = df2_1.loc[:,"log1p_units"]
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(sampleX.values, i) for i in range(sampleX.shape[1])]
vif["features"] = sampleX.columns
vif = vif.sort_values("VIF Factor", ascending=False).reset_index(drop=True)
vif
###Output
_____no_output_____
###Markdown
VIF: drop wetbulb and recompute
###Code
cols = ['dewpoint', 'heat', 'cool', 'snowfall', 'preciptotal', 'resultspeed', 'avgspeed', \
'relative_humility', 'windchill', 'item_nbr']
sampleX = df2_1.loc[:, cols]
sampley = df2_1.loc[:,"log1p_units"]
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(sampleX.values, i) for i in range(sampleX.shape[1])]
vif["features"] = sampleX.columns
vif
###Output
_____no_output_____
###Markdown
VIF: drop dewpoint and recompute
###Code
cols = ['heat', 'cool', 'snowfall', 'preciptotal', 'resultspeed', 'avgspeed', \
'relative_humility', 'windchill', 'item_nbr']
sampleX = df2_1.loc[:, cols]
sampley = df2_1.loc[:,"log1p_units"]
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(sampleX.values, i) for i in range(sampleX.shape[1])]
vif["features"] = sampleX.columns
vif = vif.sort_values("VIF Factor", ascending=False).reset_index(drop=True)
vif
###Output
_____no_output_____
###Markdown
VIF: drop avgspeed and recompute
###Code
cols = ['heat', 'cool', 'snowfall', 'preciptotal', 'resultspeed', \
'relative_humility', 'windchill', 'item_nbr']
sampleX = df2_1.loc[:, cols]
sampley = df2_1.loc[:,"log1p_units"]
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(sampleX.values, i) for i in range(sampleX.shape[1])]
vif["features"] = sampleX.columns
vif = vif.sort_values("VIF Factor", ascending=False).reset_index(drop=True)
vif
###Output
_____no_output_____
###Markdown
VIF: drop relative_humility and recompute
###Code
cols = ['heat', 'cool', 'snowfall', 'preciptotal', 'resultspeed', 'windchill', 'item_nbr']
sampleX = df2_1.loc[:, cols]
sampley = df2_1.loc[:,"log1p_units"]
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(sampleX.values, i) for i in range(sampleX.shape[1])]
vif["features"] = sampleX.columns
vif = vif.sort_values("VIF Factor", ascending=False).reset_index(drop=True)
vif
###Output
_____no_output_____
###Markdown
9. Cross-validation after regularization: uses the model from 6-4; pure Ridge model (L1_wt=0), pure Lasso model (L1_wt=1)
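For reference, `fit_regularized` minimizes an elastic-net style objective of roughly the form
$$ \frac{1}{2n}\,\lVert y - X\beta\rVert_2^2 \;+\; \alpha\left( \mathrm{L1\_wt}\cdot\lVert\beta\rVert_1 \;+\; \frac{1-\mathrm{L1\_wt}}{2}\,\lVert\beta\rVert_2^2 \right) $$
(up to the exact scaling used by statsmodels), so `L1_wt=0` gives a pure Ridge penalty, `L1_wt=1` a pure Lasso penalty, and values in between an elastic net; `alpha` controls the overall penalty strength.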
###Code
from patsy import dmatrix
# Split into independent and dependent variables
df2_1_target = df2_1['log1p_units']
df2_1_X = df2_1.drop(columns=['log1p_units'])
len(df2_1_X), len(df2_1_target)
###Output
_____no_output_____
###Markdown
Code used when applying the model with scikit-learn, targeting df2_1 (log1p_units)
###Code
formula = 'C(month) + C(weekend) + C(rainY) + C(item_nbr) + 0'
dfX = dmatrix(formula, df2_1_X, return_type='dataframe')
dfy = pd.DataFrame(df2_1_target, columns=["log1p_units"])
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
model = LinearRegression()
cv = KFold(10, shuffle=True, random_state=0)
scores = np.zeros(10)
for i, (train_index, test_index) in enumerate(cv.split(dfX)):
X_train = dfX.values[train_index]
y_train = dfy.values[train_index]
X_test = dfX.values[test_index]
y_test = dfy.values[test_index]
model = model.fit(X_train, y_train)
y_pred = model.predict(X_test)
scores[i] = r2_score(y_test, y_pred)
scores
# Ridge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
cv = KFold(10, shuffle=True, random_state=0)
scores1 = np.zeros(10)
for i, (train_index, test_index) in enumerate(cv.split(dfX)):
X_train = dfX.values[train_index]
y_train = dfy.values[train_index]
X_test = dfX.values[test_index]
y_test = dfy.values[test_index]
model = sm.OLS(y_train, X_train)
model = model.fit_regularized(alpha=0.1, L1_wt=0)
y_pred = model.predict(X_test)
scores1[i] = r2_score(y_test, y_pred)
scores1
# Lasso
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
cv = KFold(10, shuffle=True, random_state=0)
scores2 = np.zeros(10)
for i, (train_index, test_index) in enumerate(cv.split(dfX)):
X_train = dfX.values[train_index]
y_train = dfy.values[train_index]
X_test = dfX.values[test_index]
y_test = dfy.values[test_index]
model = sm.OLS(y_train, X_train)
model = model.fit_regularized(alpha=0.1, L1_wt=1)
y_pred = model.predict(X_test)
scores2[i] = r2_score(y_test, y_pred)
scores2
# Elastic net
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
cv = KFold(10, shuffle=True, random_state=0)
scores3 = np.zeros(10)
for i, (train_index, test_index) in enumerate(cv.split(dfX)):
X_train = dfX.values[train_index]
y_train = dfy.values[train_index]
X_test = dfX.values[test_index]
y_test = dfy.values[test_index]
model = sm.OLS(y_train, X_train)
model = model.fit_regularized(alpha=0.1, L1_wt=0.5)
y_pred = model.predict(X_test)
scores3[i] = r2_score(y_test, y_pred)
scores3
###Output
_____no_output_____
###Markdown
Average performance of each model (mean cross-validation score)
###Code
scores.mean(), scores1.mean(), scores2.mean(), scores3.mean()
###Output
_____no_output_____ |
notebooks/2021cnps_ccap3_semantics.ipynb | ###Markdown
Vectorizing the TLPA picture-card words with word2vec and plotting them with tSNE
- date: 2021_0830
- filename: 2021cnps_ccap3_semantics.ipynb
- author: 浅川伸一
- note: distribution copy for 2021cnps
- License: MIT License
###Code
# -*- coding: utf-8 -*-
import numpy as np
# Set the display precision for printed floats
np.set_printoptions(suppress=False, formatter={'float': '{:6.3f}'.format})
# Install the MeCab morphological analyzer and the mecab-ipadic-NEologd dictionary
# reference: https://qiita.com/jun40vn/items/78e33e29dce3d50c2df1
!apt-get -q -y install sudo file mecab libmecab-dev mecab-ipadic-utf8 git curl python-mecab
!git clone --depth 1 https://github.com/neologd/mecab-ipadic-neologd.git
!echo yes | mecab-ipadic-neologd/bin/install-mecab-ipadic-neologd -n
!pip install mecab-python3
# Create a symbolic link to work around a configuration error
!ln -s /etc/mecabrc /usr/local/etc/mecabrc
# Quick sanity check
import MeCab
neologd_path = "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"
m = MeCab.Tagger(neologd_path +' -Oyomi')
print(m.parse('鬼滅の刃'))
# Download the pretrained word2vec files
#!wget --no-check-certificate --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1B9HGhLZOja4Xku5c_d-kMhCXn1LBZgDb' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1B9HGhLZOja4Xku5c_d-kMhCXn1LBZgDb" -O 2021_05jawiki_hid128_win10_neg10_cbow.bin.gz && rm -rf /tmp/cookies.txt
#!wget --no-check-certificate --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1OWmFOVRC6amCxsomcRwdA6ILAA5s4y4M' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1OWmFOVRC6amCxsomcRwdA6ILAA5s4y4M" -O 2021_05jawiki_hid128_win10_neg10_sgns.bin.gz && rm -rf /tmp/cookies.txt
!wget --no-check-certificate --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1JTkU5SUBU2GkURCYeHkAWYs_Zlbqob0s' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1JTkU5SUBU2GkURCYeHkAWYs_Zlbqob0s" -O 2021_05jawiki_hid200_win20_neg20_cbow.bin.gz && rm -rf /tmp/cookies.txt
#!wget --no-check-certificate --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1VPL2Mr9JgWHik9HjRmcADoxXIdrQ3ds7' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1VPL2Mr9JgWHik9HjRmcADoxXIdrQ3ds7" -O 2021_05jawiki_hid200_win20_neg20_sgns.bin.gz && rm -rf /tmp/cookies.txt
# Load the file downloaded in the cell above
# gensim is used to handle the word2vec data
import os
import sys
from gensim.models import KeyedVectors
from gensim.models import Word2Vec
print('# Loading the word2vec data')
print('# Pretrained word2vec; the training data is the full Japanese Wikipedia, so loading takes a while...', end="")
# Change this according to where the file is located
w2v_base = '.'
#w2v_file='2021_05jawiki_hid128_win10_neg10_cbow.bin.gz'
#w2v_file='2021_05jawiki_hid128_win10_neg10_sgns.bin.gz'
w2v_file='2021_05jawiki_hid200_win20_neg20_cbow.bin.gz'
#w2v_file='2021_05jawiki_hid200_win20_neg20_sgns.bin.gz'
w2v_file = os.path.join(w2v_base, w2v_file)
w2v = KeyedVectors.load_word2vec_format(w2v_file,
encoding='utf-8',
unicode_errors='replace',
binary=True)
!pip install japanize_matplotlib
import matplotlib.pyplot as plt
import japanize_matplotlib
%matplotlib inline
# The TLPA data
tlpa_labels = ['バス', '緑', '桜', 'のり巻き', '五重塔', 'コップ', 'ごぼう', '土踏まず', '風呂', 'ヒトデ', 'ハム', '兎', 'ロープウエイ', '学校', 'ちりとり', '縁側', '歯', 'ネギ', 'あじさい', '灰色', '天井', '鍵', '肌色', 'ワニ', '電車', '顔', '松', 'ガードレール', '柿', 'ちまき', '信号', 'すすき', 'じょうろ', 'コンセント', '天ぷら', '中指', 'ヨット', 'ピンク', 'ふくろう', 'みかん', '柱', '角砂糖', '犬', 'かご', 'バラ', '鍋', 'まぶた', 'くるみ', '黒', 'デパート', 'カーネーション', '城', '蟻', '豆腐', 'ドライバー', '紺', '階段', '戦車', '人参', '背中', '鏡餅', 'スプーン', '朝顔', '金', '足', 'ふすま', '蛇', 'レモン', '公園', '乳母車', '床', '藤', 'ピンセット', 'トラック', '苺', '黄土色', '銭湯', 'ナマズ', 'そば', 'お腹', 'オレンジ', 'バター', '工場', '鳩', '電卓', '喉仏', 'チューリップ', '白菜', 'トラクター', '廊下', 'パトカー', '押入れ', '鉛筆', '目尻', '芋', '吊り橋', '赤', 'かき氷', '豹', 'サボテン', 'ピラミッド', 'サイ', '目', 'ひまわり', 'はたき', '刺し身', '玄関', 'トマト', '黄緑', '三輪車', '鶏', 'つむじ', 'アスパラガス', 'ドア', '銀色', 'すりこ木', 'ウイスキー', '梅', 'タクシー', '動物園', '床の間', '焦げ茶', 'ぶどう', '飴', '毛虫', 'アイロン', '寺', 'そり', 'ひょうたん', '首', '消しゴム', '頬', 'いちょう', '駅', 'ギョウザ', '牛', 'びわ', '飛行機', '畳', '白', '竹', 'ペリカン', '紫', '手すり', '口', '大根', '風車', '鋏', '潜水艦', 'ステーキ', 'マッチ', '二階', '落花生', '御飯', '自転車', '歩道橋', '鯨', '茶色', '菖蒲', 'ふくらはぎ', '桃', 'たいやき', '道路', '靴べら', '水色', '壁', 'たんぽぽ', 'いかだ', '山羊', '鼻', '海老', '台所', 'オートバイ', 'かぶ', '柳', 'しゃもじ', 'まんじゅう', 'かかと', '薄紫', '家', 'おせち料理', '青', '傘', 'つくし', 'りんご', '馬車', '線路', 'タツノオトシゴ', '耳', '便所', '蓮根', '猫', '黄色', 'へそ', '街灯', '障子', '酒', '船', '安全ピン', 'もみじ']
tlpa_fam = ['高', '高', '高', '低', '低', '高', '低', '低', '高', '低', '高', '高', '低', '高', '低', '低', '高', '高', '低', '低', '高', '高', '低', '低', '高', '高', '高', '低', '低', '低', '高', '低', '低', '低', '高', '低', '高', '高', '低', '高', '低', '低', '高', '低', '高', '高', '低', '低', '高', '高', '低', '低', '高', '高', '低', '低', '高', '低', '高', '高', '低', '高', '高', '低', '高', '低', '高', '低', '高', '低', '高', '低', '低', '高', '高', '低', '低', '低', '高', '高', '高', '高', '高', '高', '低', '低', '高', '低', '低', '低', '高', '高', '高', '低', '高', '低', '高', '低', '低', '低', '低', '低', '高', '高', '低', '高', '高', '高', '低', '低', '高', '低', '低', '高', '低', '低', '低', '高', '高', '高', '低', '低', '高', '高', '低', '高', '高', '低', '低', '高', '高', '低', '低', '高', '低', '高', '低', '高', '低', '高', '高', '低', '高', '低', '高', '高', '低', '高', '低', '低', '高', '低', '低', '高', '高', '低', '高', '高', '低', '低', '高', '低', '高', '低', '低', '高', '高', '低', '低', '高', '高', '高', '高', '低', '低', '低', '高', '低', '低', '高', '低', '高', '高', '低', '高', '低', '低', '低', '高', '高', '低', '高', '高', '低', '低', '低', '高', '高', '低', '高']
tlpa_cat = ['乗り物', '色', '植物', '加工食品', '建造物', '道具', '野菜果物', '身体部位', '屋内部位', '動物', '加工食品', '動物', '乗り物', '建造物', '道具', '屋内部位', '身体部位', '野菜果物', '植物', '色', '屋内部位', '道具', '色', '動物', '乗り物', '身体部位', '植物', '建造物', '野菜果物', '加工食品', '建造物', '植物', '道具', '屋内部位', '加工食品', '身体部位', '乗り物', '色', '動物', '野菜果物', '屋内部位', '加工食品', '動物', '乗り物', '植物', '道具', '身体部位', '野菜果物', '色', '建造物', '植物', '建造物', '動物', '加工食品', '道具', '色', '屋内部位', '乗り物', '野菜果物', '身体部位', '加工食品', '道具', '植物', '色', '身体部位', '屋内部位', '動物', '野菜果物', '建造物', '乗り物', '屋内部位', '植物', '道具', '乗り物', '野菜果物', '色', '建造物', '動物', '加工食品', '身体部位', '色', '加工食品', '建造物', '動物', '道具', '身体部位', '植物', '野菜果物', '乗り物', '屋内部位', '乗り物', '屋内部位', '道具', '身体部位', '野菜果物', '建造物', '色', '加工食品', '動物', '植物', '建造物', '動物', '身体部位', '植物', '道具', '加工食品', '屋内部位', '野菜果物', '色', '乗り物', '動物', '身体部位', '野菜果物', '屋内部位', '色', '道具', '加工食品', '植物', '乗り物', '建造物', '屋内部位', '色', '野菜果物', '加工食品', '動物', '道具', '建造物', '乗り物', '植物', '身体部位', '道具', '身体部位', '植物', '建造物', '加工食品', '動物', '野菜果物', '乗り物', '屋内部位', '色', '植物', '動物', '色', '屋内部位', '身体部位', '野菜果物', '建造物', '道具', '乗り物', '加工食品', '道具', '屋内部位', '野菜果物', '加工食品', '乗り物', '建造物', '動物', '色', '植物', '身体部位', '野菜果物', '加工食品', '建造物', '道具', '色', '屋内部位', '植物', '乗り物', '動物', '身体部位', '動物', '屋内部位', '乗り物', '野菜果物', '植物', '道具', '加工食品', '身体部位', '色', '建造物', '加工食品', '色', '道具', '植物', '野菜果物', '乗り物', '建造物', '動物', '身体部位', '屋内部位', '野菜果物', '動物', '色', '身体部位', '建造物', '屋内部位', '加工食品', '乗り物', '道具', '植物']
import sys
import numpy as np
"""
- source: https://lvdmaaten.github.io/tsne/
- a version of the original tSNE Python implementation modified so that it can be called from Python 3
- date: 2021_0510
- author: 浅川伸一
```python
import numpy as np
import tsne
X = np.random.random(100, 30)
result = tsne.tsne(X)
```
"""
# tsne.py
#
# Implementation of t-SNE in Python. The implementation was tested on Python 2.7.10, and it requires a working
# installation of NumPy. The implementation comes with an example on the MNIST dataset. In order to plot the
# results of this example, a working installation of matplotlib is required.
#
# The example can be run by executing: `ipython tsne.py`
#
#
# Created by Laurens van der Maaten on 20-12-08.
# Copyright (c) 2008 Tilburg University. All rights reserved.
#import numpy as Math
#import pylab as Plot
def Hbeta(D = np.array([]), beta = 1.0):
"""Compute the perplexity and the P-row for a specific value of the precision of a Gaussian distribution."""
# Compute P-row and corresponding perplexity
P = np.exp(-D.copy() * beta)
    sumP = sum(P) if sum(P) > 1e-12 else 1e-12 # guard added to avoid a division-by-zero error on the next line
H = np.log(sumP) + beta * np.sum(D * P) / sumP
P = P / sumP
return H, P
def x2p(X = np.array([]), tol=1e-5, perplexity=30.0):
"""Performs a binary search to get P-values in such a way that each conditional Gaussian has the same perplexity."""
# Initialize some variables
#print("Computing pairwise distances...")
(n, d) = X.shape
sum_X = np.sum(np.square(X), 1)
D = np.add(np.add(-2 * np.dot(X, X.T), sum_X).T, sum_X)
P = np.zeros((n, n))
beta = np.ones((n, 1))
logU = np.log(perplexity)
# Loop over all datapoints
for i in range(n):
# Print progress
#if i % 500 == 0:
# print("Computing P-values for point ", i, " of ", n, "...")
# Compute the Gaussian kernel and entropy for the current precision
betamin = -np.inf
betamax = np.inf
Di = D[i, np.concatenate((np.r_[0:i], np.r_[i+1:n]))]
(H, thisP) = Hbeta(Di, beta[i])
# Evaluate whether the perplexity is within tolerance
Hdiff = H - logU
tries = 0
while np.abs(Hdiff) > tol and tries < 50:
# If not, increase or decrease precision
if Hdiff > 0:
betamin = beta[i].copy()
if betamax == np.inf or betamax == -np.inf:
beta[i] = beta[i] * 2
else:
beta[i] = (beta[i] + betamax) / 2;
else:
betamax = beta[i].copy()
if betamin == np.inf or betamin == -np.inf:
beta[i] = beta[i] / 2
else:
beta[i] = (beta[i] + betamin) / 2;
# Recompute the values
(H, thisP) = Hbeta(Di, beta[i])
Hdiff = H - logU
tries = tries + 1
# Set the final row of P
P[i, np.concatenate((np.r_[0:i], np.r_[i+1:n]))] = thisP
# Return final P-matrix
sigma = np.mean(np.sqrt(1 / beta))
print(f'Mean value of sigma: {sigma:.3f}')
return P
def pca(X = np.array([]), no_dims = 50):
"""Runs PCA on the NxD array X in order to reduce its dimensionality to no_dims dimensions."""
#print("Preprocessing the data using PCA...")
(n, d) = X.shape
X = X - np.tile(np.mean(X, 0), (n, 1))
(l, M) = np.linalg.eig(np.dot(X.T, X))
Y = np.dot(X, M[:,0:no_dims])
return Y
def tsne(X = np.array([]), no_dims=2, initial_dims=50, perplexity=30.0):
"""
Runs t-SNE on the dataset in the NxD array X to reduce its dimensionality to no_dims dimensions.
The syntaxis of the function is Y = tsne.tsne(X, no_dims, perplexity), where X is an NxD NumPy array.
"""
# Check inputs
if isinstance(no_dims, float):
print("Error: array X should have type float.")
return -1
if round(no_dims) != no_dims:
print("Error: number of dimensions should be an integer.")
return -1
# Initialize variables
X = pca(X, initial_dims).real
(n, d) = X.shape
max_iter = 1000
initial_momentum = 0.5
final_momentum = 0.8
eta = 500
min_gain = 0.01
Y = np.random.randn(n, no_dims)
dY = np.zeros((n, no_dims))
iY = np.zeros((n, no_dims))
gains = np.ones((n, no_dims))
# Compute P-values
P = x2p(X, 1e-5, perplexity)
P = P + np.transpose(P)
P = P / np.sum(P)
P = P * 4 # early exaggeration
P = np.maximum(P, 1e-12)
#P = np.maximum(P, 1e-5)
interval = int(max_iter >> 2)
# Run iterations
for iter in range(max_iter):
# Compute pairwise affinities
sum_Y = np.sum(np.square(Y), 1)
num = 1 / (1 + np.add(np.add(-2 * np.dot(Y, Y.T), sum_Y).T, sum_Y))
num[range(n), range(n)] = 0
Q = num / np.sum(num)
Q = np.maximum(Q, 1e-12)
#Q = np.maximum(Q, 1e-5)
# Compute gradient
PQ = P - Q;
for i in range(n):
dY[i,:] = np.sum(np.tile(PQ[:,i] * num[:,i], (no_dims, 1)).T * (Y[i,:] - Y), 0)
# Perform the update
if iter < 20:
momentum = initial_momentum
else:
momentum = final_momentum
gains = (gains + 0.2) * ((dY > 0) != (iY > 0)) + (gains * 0.8) * ((dY > 0) == (iY > 0))
gains[gains < min_gain] = min_gain
iY = momentum * iY - eta * (gains * dY)
Y = Y + iY
Y = Y - np.tile(np.mean(Y, 0), (n, 1))
# Compute current value of cost function
#if (iter + 1) % 10 == 0:
#if (iter + 1) % interval == 0:
# C = np.sum(P * np.log(P / Q))
# print(f"Iteration {(iter + 1):<5d}: error is {C:.3f}")
# Stop lying about P-values
if iter == 100:
P = P / 4;
# Return solution
return Y;
# if __name__ == "__main__":
# print("Run Y = tsne.tsne(X, no_dims, perplexity) to perform t-SNE on your dataset.")
# print("Running example on 2,500 MNIST digits...")
# X = np.loadtxt("mnist2500_X.txt");
# labels = np.loadtxt("mnist2500_labels.txt");
# Y = tsne(X, 2, 50, 20.0);
# Plot.scatter(Y[:,0], Y[:,1], 20, labels);
# Plot.show();
tlpa_cats = list(set(tlpa_cat))
tlpa_colors = [tlpa_cats.index(c) for c in tlpa_cat]
def draw_tSNE_plot(data,
colors=tlpa_colors,
labels=tlpa_labels,
fontsize=16,
figsize=(16,16),
xmax=None, xmin=None,
ymax=None, ymin=None,
save_figname=None,
grid = True,
auto_lim = True,
):
"""tSNE のプロットを描画する関数
引数:
data: np.array[N,2]
tSNE の結果
colors: list[N]
各項目の色を指定する数字 N 個
labels: list[str]
散布図中に表示する項目名のリスト
figsize: タプル
縦横のサイズ。単位はインチ。だが昔と違ってディスプレイサイズがまちまちなので目安でしか無い
xmax, xmin, ymax, ymin: int
図の最大値と最小値を指定する。
指定しなければ自動計算する
save_figname: str
保存するファイル名 pdf ファイルとして保存するなら .pdf 拡張子をつける
grid: Boolean
図中にグリッドを表示するか否か。デフォルトは True
auto_lim: Boolean
最大値最小値を自動計算するか否か
xmax, xmin, ymax, ymin が指定されていれば自動的に False になる
"""
fig = plt.figure(figsize=figsize)
axe = fig.add_subplot(1,1,1)
    # fall back to the range of the plotted data when a limit is not given
    if xmax is None: xmax = (data[:,0]).max(); auto_lim = False
    if xmin is None: xmin = (data[:,0]).min(); auto_lim = False
    if ymax is None: ymax = (data[:,1]).max(); auto_lim = False
    if ymin is None: ymin = (data[:,1]).min(); auto_lim = False
if auto_lim:
axe.set_xlim(xmin, xmax); axe.set_ylim(ymin, ymax)
if grid:
axe.grid()
axe.scatter(data[:,0], data[:,1], 120, colors)
for i, label in enumerate(labels):
axe.annotate(label, (data[i,0], data[i,1]), fontsize=fontsize)
if save_figname != None:
plt.savefig(save_figname)
return fig, axe
# Plot the tSNE embedding of the TLPA words, drawing one ellipse per category
# Preparation for drawing the ellipses
from matplotlib.patches import Ellipse
np.set_printoptions(suppress=False, formatter={'float': '{:6.3f}'.format})
seed = 0
np.random.seed(seed)
X = np.array([w2v[word] for word in tlpa_labels], dtype=np.float)
tlpa_cats = list(set(tlpa_cat))
tlpa_colors = [tlpa_cats.index(c) for c in tlpa_cat]
tlpa_results = tsne(X, perplexity=30.0)
f, axe = draw_tSNE_plot(tlpa_results, labels=tlpa_labels, colors=tlpa_colors)
tlpa_group_avgs = np.zeros((len(tlpa_cats),2),dtype=np.float)
tlpa_groups = {}
for i in range(len(tlpa_cats)):
tlpa_groups[i] = []
for i, c in enumerate(tlpa_colors):
tlpa_group_avgs[c] += tlpa_results[i]/20
tlpa_groups[c].append(tlpa_results[i])
colors = ['blue', 'orange', 'green', 'red', 'purple', 'brown', 'pink', 'gray', 'olive', 'cyan']
plt.title('加工食品:blue 動物:orange 建造物:green 屋内部位:red 道具:purple 植物:brown 身体部位:pink 色:gray 乗り物:olive 野菜果物:cyan')
for i, (x, y) in enumerate(zip(tlpa_cats, tlpa_group_avgs)):
tlpa_groups[i] = np.array(tlpa_groups[i])
Cov = np.cov(tlpa_groups[i].T)
width, height = Cov[0,0], Cov[1,1]
corr = np.corrcoef(tlpa_groups[i].T)[0,1]
deg = np.rad2deg(np.arccos(corr))
    print(f'{i:3d} category:{x:<7s}, center:{y} width:{width:.3f} height:{height:.03f}',
f' deg:{deg:.3f} color:{colors[i]}')
const = 0.5
#tlpa_groups[i] = {'cat_name':x, 'xy':(y[0],y[1]), 'width':np.sqrt(width) * 1.4, 'height':np.sqrt(height) * 1.4, 'deg':deg, 'color':colors[i]}
#tlpa_groups[i] = {'cat_name':x, 'xy':(y[0],y[1]), 'width':width, 'height':height, 'deg':deg, 'color':colors[i]}
#tlpa_groups[i] = {'cat_name':x, 'xy':(y[0],y[1]), 'width':width * 0.4, 'height':height * 0.4, 'deg':deg, 'color':colors[i]}
#tlpa_groups[i] = {'cat_name':x, 'xy':(y[0],y[1]), 'width':width * 0.7, 'height':height * 0.7, 'deg':deg, 'color':colors[i]}
tlpa_groups[i] = {'cat_name':x, 'xy':(y[0],y[1]), 'width':width * const, 'height':height * const, 'deg':deg, 'color':colors[i]}
axe.add_artist(Ellipse(xy=tlpa_groups[i]['xy'],
width=tlpa_groups[i]['width'],
height=tlpa_groups[i]['height'],
#color=tlpa_groups[i]['color'],
color=colors[i],
angle=tlpa_groups[i]['deg'],
alpha=0.2
))
#plt.savefig('2021_0825tlpa_tSNE_ellipse.svg')
#print(tlpa_colors, len(tlpa_colors), type(tlpa_colors), np.array(tlpa_colors).max())
# Prof. Daimon's data: examine the semantic paraphasia 天ぷら (tempura) ---> たくあん (pickled radish)
def compare_distances_tSNE_word2vec(t_word='天ぷら', r_word='たくあん',
tlpa_labels=tlpa_labels,
seed=0,
perplexity=30.0,
save_fig=False,
draw_fig=True
):
    # Put the TLPA word vectors into X
    X = np.array([w2v[word] for word in tlpa_labels], dtype=np.float)
    r_vec = [w2v[r_word]] # get the word vector of the response word and store it in r_vec
    X_ = np.concatenate((X, r_vec), axis=0) # combine X and r_vec into a new matrix X_
    # Build the label list tlpa_labels_ to match X_ above
tlpa_labels_ = ['バス', '緑', '桜', 'のり巻き', '五重塔', 'コップ', 'ごぼう', '土踏まず', '風呂', 'ヒトデ', \
'ハム', '兎', 'ロープウエイ', '学校', 'ちりとり', '縁側', '歯', 'ネギ', 'あじさい', '灰色', \
'天井', '鍵', '肌色', 'ワニ', '電車', '顔', '松', 'ガードレール', '柿', 'ちまき', '信号', \
'すすき', 'じょうろ', 'コンセント', '天ぷら', '中指', 'ヨット', 'ピンク', 'ふくろう', 'みかん', \
'柱', '角砂糖', '犬', 'かご', 'バラ', '鍋', 'まぶた', 'くるみ', '黒', 'デパート', 'カーネーション', \
'城', '蟻', '豆腐', 'ドライバー', '紺', '階段', '戦車', '人参', '背中', '鏡餅', 'スプーン', \
'朝顔', '金', '足', 'ふすま', '蛇', 'レモン', '公園', '乳母車', '床', '藤', 'ピンセット', \
'トラック', '苺', '黄土色', '銭湯', 'ナマズ', 'そば', 'お腹', 'オレンジ', 'バター', '工場', \
'鳩', '電卓', '喉仏', 'チューリップ', '白菜', 'トラクター', '廊下', 'パトカー', '押入れ', \
'鉛筆', '目尻', '芋', '吊り橋', '赤', 'かき氷', '豹', 'サボテン', 'ピラミッド', 'サイ', '目', \
'ひまわり', 'はたき', '刺し身', '玄関', 'トマト', '黄緑', '三輪車', '鶏', 'つむじ', 'アスパラガス', \
'ドア', '銀色', 'すりこ木', 'ウイスキー', '梅', 'タクシー', '動物園', '床の間', '焦げ茶', 'ぶどう', \
'飴', '毛虫', 'アイロン', '寺', 'そり', 'ひょうたん', '首', '消しゴム', '頬', 'いちょう', '駅', \
'ギョウザ', '牛', 'びわ', '飛行機', '畳', '白', '竹', 'ペリカン', '紫', '手すり', '口', '大根', \
'風車', '鋏', '潜水艦', 'ステーキ', 'マッチ', '二階', '落花生', '御飯', '自転車', '歩道橋', '鯨', \
'茶色', '菖蒲', 'ふくらはぎ', '桃', 'たいやき', '道路', '靴べら', '水色', '壁', 'たんぽぽ', \
'いかだ', '山羊', '鼻', '海老', '台所', 'オートバイ', 'かぶ', '柳', 'しゃもじ', 'まんじゅう', \
'かかと', '薄紫', '家', 'おせち料理', '青', '傘', 'つくし', 'りんご', '馬車', '線路', \
'タツノオトシゴ', '耳', '便所', '蓮根', '猫', '黄色', 'へそ', '街灯', '障子', '酒', '船', \
'安全ピン', 'もみじ', r_word]
    # The response word was appended last, so assign it the 11th color
tlpa_colors_ = [8, 7, 5, 0, 2, 4, 9, 6, 3, 1, 0, 1, 8, 2, 4, 3, 6, 9, 5, 7, 3, 4, 7, 1, 8, 6, 5, 2, 9, 0, 2, 5, 4, 3, 0, 6, 8, 7, 1, 9, 3, 0, 1, 8, 5, 4, 6, 9, 7, 2, 5, 2, 1, 0, 4, 7, 3, 8, 9, 6, 0, 4, 5, 7, 6, 3, 1, 9, 2, 8, 3, 5, 4, 8, 9, 7, 2, 1, 0, 6, 7, 0, 2, 1, 4, 6, 5, 9, 8, 3, 8, 3, 4, 6, 9, 2, 7, 0, 1, 5, 2, 1, 6, 5, 4, 0, 3, 9, 7, 8, 1, 6, 9, 3, 7, 4, 0, 5, 8, 2, 3, 7, 9, 0, 1, 4, 2, 8, 5, 6, 4, 6, 5, 2, 0, 1, 9, 8, 3, 7, 5, 1, 7, 3, 6, 9, 2, 4, 8, 0, 4, 3, 9, 0, 8, 2, 1, 7, 5, 6, 9, 0, 2, 4, 7, 3, 5, 8, 1, 6, 1, 3, 8, 9, 5, 4, 0, 6, 7, 2, 0, 7, 4, 5, 9, 8, 2, 1, 6, 3, 9, 1, 7, 6, 2, 3, 0, 8, 4, 5, 10]
t_num = tlpa_labels_.index(t_word)
r_num = tlpa_labels_.index(r_word)
    tlpa_colors_[t_num] = 10 # for readability, give the target word the same 11th color as the response word
    np.random.seed(seed) # set the random seed
    tlpa_results_ = tsne(X_, perplexity=perplexity) # run tSNE
if draw_fig:
f, axe = draw_tSNE_plot(tlpa_results_, labels=tlpa_labels_, colors=tlpa_colors_)
if save_fig:
save_fname = '2021_0825'+t_word+'_'+r_word+'.pdf'
plt.savefig(save_fname)
plt.show()
    a = tlpa_results_[t_num] # tSNE coordinates of the target word
    b = tlpa_results_[r_num] # tSNE coordinates of the response word
    print(f'{t_word}: {a} {r_word}: {b}') # show the result
    tsne_dist = np.linalg.norm(a - b) # Euclidean distance between the target and the response in tSNE space
    w2v_dist = w2v.distance(t_word, r_word)
    #print(f'tSNE distance between {t_word} and {r_word}: {dist:.3f}',
    #      f'word2vec distance: {w2v_dist:.3f}')
return tsne_dist, w2v_dist
# Prof. Daimon's data: each list element is a tuple whose first item is the stimulus word and whose second item is the response
daimon_results = [ ('あじさい', 'フラワー'), ('ちまき','ふきのとう'),
('天ぷら','たくあん'), ('角砂糖','ストロー'),
('角砂糖', 'フォーク'), ('鍋','やかん'),
('城','やぐら'), ('廊下','戸締り'),
                   ('安全ピン','ピンセット') # only this pair is a formal (phonological) paraphasia
]
for pair in daimon_results:
tsne_dist, w2v_dist = compare_distances_tSNE_word2vec(pair[0], pair[1], save_fig=False, draw_fig=False)
    print(f'Distance between {pair[0]} and {pair[1]} in tSNE space: {tsne_dist:.3f}, distance in word2vec space (1 - similarity): {w2v_dist:.3f}')
###Output
_____no_output_____ |
GTI770-TP03.ipynb | ###Markdown
Lab 3: Support vector machines and neural networks. Department of Software and Information Technology Engineering

| Students | NAMES - STUDENT ID |
|-----------------------|---------------------------------------------------------|
| Course | GTI770 - Intelligent systems and machine learning |
| Session | SEASON YEAR |
| Group | X |
| Lab number | X |
| Professor | Prof. NAME |
| Lab supervisor | NAME |
| Date | DATE |
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Lab 3: Support vector machines and neural networks. Department of Software and Information Technology Engineering

| Students | |
|-----------------------|---------------------------------------------------------|
| Jean-Philippe Decoste | DECJ19059105 |
| Ahmad Al-Taher | ALTA22109307 |
| Stéphanie Lacerte | LACS06629109 |
| Course | GTI770 - Intelligent systems and machine learning |
| Session | Fall 2018 |
| Group | 2 |
| Lab number | 02 |
| Professor | Hervé Lombaert |
| Lab supervisor | Pierre-Luc Delisle |
| Date | Oct 30, 2018 |
###Code
import csv
import math
import os
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
import tensorflow as tf
from tensorflow import keras
from tabulate import tabulate
import matplotlib.pyplot as plt
from helpers import utilities as Utils
from helpers import datasets as Data
###Output
_____no_output_____
###Markdown
Parameters
###Code
#MERGED_GALAXY_PRIMITIVE = r"data\csv\eq07_pMerged.csv"
ALL_GALAXY_PRIMITIVE = r"data\csv\galaxy\galaxy_feature_vectors.csv"
MERGED_GALAXY_PRIMITIVE = r".\galaxies.csv"
# Neural Network
LAYERS_ACTIVATION = 'relu'
LAST_LAYER_ACTIVATION = 'sigmoid'
TENSORBOARD_SUMMARY = r"tensorboard"
###Output
_____no_output_____
###Markdown
SVM
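The grid search below tunes the penalty parameter $C$ and, for the RBF kernel, the width parameter $\gamma$ of
$$ K(x, x') = \exp\!\left(-\gamma\,\lVert x - x'\rVert^2\right), $$
where a larger $C$ penalizes margin violations more heavily and a larger $\gamma$ makes the decision boundary more local.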
###Code
def svm():
#linear
print("SVM linear")
c=[0.001,0.1,1.0,10.0]
params = dict(kernel=['linear'], C=c ,class_weight=['balanced'], cache_size=[2048])
grid = GridSearchCV(SVC(), param_grid=params, cv=dataset_splitter, n_jobs=-1, iid=True)
#Fit the feature to svm algo
grid.fit(features_SVM, answers)
#build table
outPut = []
y1 = []
for i in range(0, 4):
outPut.append([grid.cv_results_['params'][i]['C'],
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][i]*100)])
y1.append(grid.cv_results_['mean_test_score'][i]*100)
#print table
print(tabulate(outPut, headers=['Variable C','class_weight= {‘balanced’}']))
print("The best parameters are ", grid.best_params_," with a score of {0:.2f}%".format(float(grid.best_score_)* 100))
#rbf
print("\nSVM rbf")
params = dict(kernel=['rbf'], C=c, gamma=c ,class_weight=['balanced'], cache_size=[2048])
grid = GridSearchCV(SVC(), param_grid=params, cv=dataset_splitter, n_jobs=-1, iid=True)
#Fit the feature to svm algo
grid.fit(features_SVM, answers)
#build table
outPut = []
outPut.append(['0.001',
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][0]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][1]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][2]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][3]*100)])
outPut.append(['0.1',
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][4]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][5]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][6]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][7]*100)])
outPut.append(['1.0',
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][8]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][9]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][10]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][11]*100)])
outPut.append(['10.0',
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][12]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][13]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][14]*100),
"{0:.2f}%".format(grid.cv_results_['mean_test_score'][15]*100)])
y2=[grid.cv_results_['mean_test_score'][0]*100,grid.cv_results_['mean_test_score'][1]*100,grid.cv_results_['mean_test_score'][2]*100,grid.cv_results_['mean_test_score'][3]*100]
y3=[grid.cv_results_['mean_test_score'][4]*100,grid.cv_results_['mean_test_score'][5]*100,grid.cv_results_['mean_test_score'][6]*100,grid.cv_results_['mean_test_score'][7]*100]
y4=[grid.cv_results_['mean_test_score'][8]*100,grid.cv_results_['mean_test_score'][9]*100,grid.cv_results_['mean_test_score'][10]*100,grid.cv_results_['mean_test_score'][11]*100]
y5=[grid.cv_results_['mean_test_score'][12]*100,grid.cv_results_['mean_test_score'][13]*100,grid.cv_results_['mean_test_score'][14]*100,grid.cv_results_['mean_test_score'][15]*100]
#print table
print(tabulate(outPut, headers=['Variable C','Ɣ=0.001','Ɣ=0.1','Ɣ=1.0','Ɣ=10.0']))
print("The best parameters are ", grid.best_params_," with a score of {0:.2f}%".format(float(grid.best_score_)* 100))
print("-> Done\n\n")
plt.grid(True)
plt.xlabel('Variable C')
plt.ylabel('Accuracy')
plt.plot(c, y1,label='Linear')
plt.plot(c, y2,label='Gamma=0.001')
plt.plot(c, y3,label='Gamma=0.1')
plt.plot(c, y4,label='Gamma=1')
plt.plot(c, y5,label='Gamma=10')
plt.legend()
plt.ylim(45, 85)
plt.show()
###Output
_____no_output_____
###Markdown
Neural networks
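The models below end in a single sigmoid unit and are compiled with `binary_crossentropy`, i.e. they minimize
$$ \mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i\log\hat{y}_i + (1-y_i)\log(1-\hat{y}_i)\Big] $$
over the training examples, which matches the sigmoid output layer used in `neuralNetwork`.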
###Code
def neuralNetwork(runId, networkFrame, epoch, learning_rate):
# Format arrays to np arrays
features_train = []
answers_train = []
features_test = []
answers_test = []
for train_index, test_index in dataset_splitter.split(features, answers):
for elem in train_index:
features_train.append(features[elem])
answers_train.append(answers[elem])
for elem in test_index:
features_test.append(features[elem])
answers_test.append(answers[elem])
print("1.Initializing Neural Network for run #" + str(runId))
# Create a default in-process session.
directory = TENSORBOARD_SUMMARY + "/run" + str(runId)
if not os.path.exists(directory):
os.makedirs(directory)
print("TensorBoard summary writer at :" + directory + "\n")
tbCallBack = keras.callbacks.TensorBoard(log_dir=directory, histogram_freq=1, write_graph=False, write_images=False)
# Parameters
dimension = len(features[0])
layers = networkFrame
epoch = epoch
batch_size = 200
learning_rate = learning_rate
# The network type
neuralNetwork_model = keras.Sequential()
counter = 1
# Set layer in model
# First layer is set according to data dimension
neuralNetwork_model.add(keras.layers.Dense(dimension, input_dim=dimension, kernel_initializer='random_normal', bias_initializer='zeros', activation=LAYERS_ACTIVATION))
neuralNetwork_model.add(keras.layers.Dropout(0.5))
# Other layer set using layers array
for perceptron in layers:
if len(layers) == counter:
# Last layer (2 neurons for 2 possible class, SIGMOID ensure result between 1 and 0)
neuralNetwork_model.add(keras.layers.Dense(1, kernel_initializer='random_normal', bias_initializer='zeros', activation=LAST_LAYER_ACTIVATION))
#print("Layer #" + str(counter) + ": dimension = " + str(2) + ", activation = " + LAST_LAYER_ACTIVATION)
else:
# Adds Layer
neuralNetwork_model.add(keras.layers.Dense(perceptron, kernel_initializer='random_normal', bias_initializer='zeros', activation=LAYERS_ACTIVATION))
neuralNetwork_model.add(keras.layers.Dropout(0.5))
#print("Layer #" + str(counter) + ": dimension = " + str(perceptron) + ", activation = " + LAYERS_ACTIVATION)
counter = counter + 1
# Compile the network according to previous settings
neuralNetwork_model.compile(optimizer=tf.train.AdamOptimizer(learning_rate),
loss='binary_crossentropy',
metrics=['accuracy'])
# Print visualisation of network (layer and perceptron)
neuralNetwork_model.summary()
# Fit model to data
print("\n2.Training\n")
neuralNetwork_model.fit(np.array(features_train), np.array(answers_train),
epochs=epoch,
batch_size=batch_size,
validation_data=(np.array(features_test), np.array(answers_test)),
callbacks=[tbCallBack],
verbose=2)
# Evaluation
#scores = neuralNetwork_model.evaluate(np.array(features_train), np.array(answers_train), verbose=1)
#print("\n%s: %.2f%%" % (neuralNetwork_model.metrics_names[1], scores[1]*100))
# Clear previous model
keras.backend.clear_session()
###Output
_____no_output_____
###Markdown
Main
###Code
#1.A Read Galaxy features (name of file, path, n_split, test size, random state)
if os.path.isfile(MERGED_GALAXY_PRIMITIVE):
features, features_SVM, answers, dataset_splitter = Data.prepareDataset("Galaxy", MERGED_GALAXY_PRIMITIVE, 5, 0.2, 0)
else:
features, features_SVM, answers, dataset_splitter = Data.prepareDataset("Galaxy", ALL_GALAXY_PRIMITIVE, 5, 0.2, 0)
#2. Algorithms
print("ALGORITHMS")
print("\nSVM:")
svm()
print("\nNeural Network:")
print("TensorFlow version:" + tf.VERSION + ", Keras version:" + tf.keras.__version__ + "\n")
# Diff number of layer
neuralNetwork(1, [100, 100, 2], 60, 0.0005)
neuralNetwork(2, [100, 2], 60, 0.0005)
neuralNetwork(3, [100, 100, 100, 100, 2], 60, 0.0005)
# Diff perceptron
neuralNetwork(4, [80, 50, 2], 60, 0.0005)
neuralNetwork(5, [120, 2], 60, 0.0005)
neuralNetwork(6, [100, 120, 100, 50, 2], 60, 0.0005)
# Diff epoch
neuralNetwork(7, [100, 100, 2], 60, 0.0005)
neuralNetwork(8, [100, 2], 20, 0.0005)
neuralNetwork(9, [100, 100, 100, 100, 2], 100, 0.0005)
# Diff learning
neuralNetwork(10, [100, 100, 2], 60, 0.0005)
neuralNetwork(11, [100, 2], 60, 0.005)
neuralNetwork(12, [100, 100, 100, 100, 2], 60, 0.05)
###Output
PREPARING DATASETS
Reading Galaxy features:
Progress |**************************************************| 100.0% Complete
-> Done
Splitting Dataset according to these params:
Property Value
------------ -------
n_splits 5
test_size 0.2
random_state 0
-> Done
ALGORITHMS
SVM:
SVM linear
Variable C class_weight= {‘balanced’}
------------ ----------------------------
0.001 51.89%
0.1 77.97%
1 81.28%
10 81.43%
The best parameters are {'C': 10.0, 'cache_size': 2048, 'class_weight': 'balanced', 'kernel': 'linear'} with a score of 81.43%
SVM rbf
Variable C Ɣ=0.001 Ɣ=0.1 Ɣ=1.0 Ɣ=10.0
------------ --------- ------- ------- --------
0.001 51.89% 51.89% 51.89% 72.26%
0.1 48.11% 69.83% 81.37% 84.39%
1 48.46% 79.79% 83.81% 84.91%
10 69.69% 82.45% 84.46% 85.10%
The best parameters are {'C': 10.0, 'cache_size': 2048, 'class_weight': 'balanced', 'gamma': 10.0, 'kernel': 'rbf'} with a score of 85.10%
-> Done
|
lectures/L19/L19_Exercise_final.ipynb | ###Markdown
## Lecture 19
Monday, November 13th 2017

Joins with `SQLite`, `pandas`

### Starting Up
You can connect to the saved database from last time if you want. Alternatively, for extra practice, you can just recreate it from the datasets provided in the `.txt` files. That's what I'll do.
###Code
import sqlite3
import numpy as np
import pandas as pd
import time
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
db = sqlite3.connect('L19DB.sqlite')
cursor = db.cursor()
cursor.execute("DROP TABLE IF EXISTS candidates")
cursor.execute("DROP TABLE IF EXISTS contributors")
cursor.execute("PRAGMA foreign_keys=1")
cursor.execute('''CREATE TABLE candidates (
id INTEGER PRIMARY KEY NOT NULL,
first_name TEXT,
last_name TEXT,
middle_init TEXT,
party TEXT NOT NULL)''')
db.commit() # Commit changes to the database
cursor.execute('''CREATE TABLE contributors (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
last_name TEXT,
first_name TEXT,
middle_name TEXT,
street_1 TEXT,
street_2 TEXT,
city TEXT,
state TEXT,
zip TEXT,
amount REAL,
date DATETIME,
candidate_id INTEGER NOT NULL,
FOREIGN KEY(candidate_id) REFERENCES candidates(id))''')
db.commit()
with open ("candidates.txt") as candidates:
next(candidates) # jump over the header
for line in candidates.readlines():
cid, first_name, last_name, middle_name, party = line.strip().split('|')
vals_to_insert = (int(cid), first_name, last_name, middle_name, party)
cursor.execute('''INSERT INTO candidates
(id, first_name, last_name, middle_init, party)
VALUES (?, ?, ?, ?, ?)''', vals_to_insert)
with open ("contributors.txt") as contributors:
next(contributors)
for line in contributors.readlines():
cid, last_name, first_name, middle_name, street_1, street_2, \
city, state, zip_code, amount, date, candidate_id = line.strip().split('|')
vals_to_insert = (last_name, first_name, middle_name, street_1, street_2,
city, state, int(zip_code), amount, date, candidate_id)
cursor.execute('''INSERT INTO contributors (last_name, first_name, middle_name,
street_1, street_2, city, state, zip, amount, date, candidate_id)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)
candidate_cols = [col[1] for col in cursor.execute("PRAGMA table_info(candidates)")]
contributor_cols = [col[1] for col in cursor.execute("PRAGMA table_info(contributors)")]
def viz_tables(cols, query):
q = cursor.execute(query).fetchall()
framelist = []
for i, col_name in enumerate(cols):
framelist.append((col_name, [col[i] for col in q]))
return pd.DataFrame.from_items(framelist)
###Output
_____no_output_____
###Markdown
Recap. Last time, you played with a bunch of `SQLite` commands to query and update the tables in the database. One thing we didn't get to was how to query the contributors table based off of a query in the candidates table. For example, suppose you want to query which contributors donated to Obama. You could use a nested `SELECT` statement to accomplish that.
###Code
query = '''SELECT * FROM contributors WHERE candidate_id = (SELECT id from candidates WHERE last_name = "Obama")'''
viz_tables(contributor_cols, query)
###Output
_____no_output_____
###Markdown
JoinsThe last example involved querying data from multiple tables.In particular, we combined columns from the two related tables (related through the `FOREIGN KEY`).This leads to the idea of *joining* multiple tables together. `SQL` has a set of commands to handle different types of joins. `SQLite` does not support the full suite of join commands offered by `SQL` but you should still be able to get the main ideas from the limited command set.We'll begin with the `INNER JOIN`. INNER JOINThe idea here is that you will combine the tables if the values of certain columns are the same between the two tables. In our example, we will join the two tables based on the candidate id. The result of the `INNER JOIN` will be a new table consisting of the columns we requested and containing the common data. Since we are joining based off of the candidate id, we will not be excluding any rows. ExampleHere are two tables. Table A has the form:| nA | attr | idA || :::: | ::::: | ::: || s1 | 23 | 0 || s2 | 7 | 2 |and table B has the form:| nB | attr | idB || :::: | ::::: | ::: || t1 | 60 | 0 || t2 | 14 | 7 || t3 | 22 | 2 |Table A is associated with Table B through a foreign key on the id column.If we join the two tables by comparing the id columns and selecting the nA, nB, and attr columns then we'll get | nA | A.attr | nB | B.attr || :::: | ::::::: | ::: | :::::: || s1 | 23 | t1 | 60 || s2 | 7 | t3 | 22 |The `SQLite` code to do this join would be ```sqlSELECT nA, A.attr, nB, B.attr FROM A INNER JOIN B ON B.idB = A.idA```Notice that the second row in table B is gone because the id values are not the same. ThoughtsWhat is `SQL` doing with this operation? It may help to visualize this with a Venn diagram. Table A has rows with values corresponding to the `idA` attribute. Column B has rows with values corresponding to the `idB` attribute. The `INNER JOIN` will combine the two tables such that rows with common entries in the `id` attributes are included. We essentially have the following Venn diagram. Exercises1. Using an `INNER JOIN`, join the candidates and contributors tables by comparing the `candidate_id` and `candidates_id` columns. Display your joined table with the columns `contributors.last_name`, `contributors.first_name`, and `candidates.last_name`.2. Do the same inner join as in the last part, but this time append a `WHERE` clause to select a specific candidate's last name.
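A minimal, self-contained sketch of the toy `INNER JOIN` above, using an in-memory `SQLite` database (the tables `A` and `B` and their columns are the toy example only and are not used elsewhere in this notebook):

```python
import sqlite3

toy = sqlite3.connect(":memory:")
toy.execute("CREATE TABLE A (nA TEXT, attr INTEGER, idA INTEGER)")
toy.execute("CREATE TABLE B (nB TEXT, attr INTEGER, idB INTEGER)")
toy.executemany("INSERT INTO A VALUES (?, ?, ?)", [("s1", 23, 0), ("s2", 7, 2)])
toy.executemany("INSERT INTO B VALUES (?, ?, ?)", [("t1", 60, 0), ("t2", 14, 7), ("t3", 22, 2)])
# The row t2 in B (idB = 7) has no partner in A, so the INNER JOIN drops it
print(toy.execute("SELECT nA, A.attr, nB, B.attr FROM A INNER JOIN B ON B.idB = A.idA").fetchall())
toy.close()
```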
###Code
from IPython.core.display import display
query = '''SELECT contributors.last_name, contributors.first_name, candidates.last_name
FROM candidates
INNER JOIN contributors ON candidates.id = contributors.candidate_id'''
display(viz_tables(['contributors.last_name', 'contributors.first_name', 'candidates.last_name'], query))
query = '''SELECT contributors.last_name, contributors.first_name, candidates.last_name
FROM candidates
INNER JOIN contributors ON candidates.id = contributors.candidate_id
WHERE candidates.last_name = "Obama"'''
display(viz_tables(['contributors.last_name', 'contributors.first_name', 'candidates.last_name'], query))
###Output
_____no_output_____
###Markdown
`LEFT JOIN` or `LEFT OUTER JOIN`There are many ways to combine two tables. We just explored one possibility in which we combined the tables based upon the intersection of the two tables (the `INNER JOIN`).Now we'll talk about the `LEFT JOIN` or `LEFT OUTER JOIN`.In words, the `LEFT JOIN` is combining the tables based upon what is in the intersection of the two tables *and* what is in the "reference" table.We can consider our toy example in two guises: Example ALet's do a `LEFT JOIN` of table B from table A. That is, we'd like to make a new table by putting table B into table A. In this case, we'll consider table A our "reference" table. We're comparing by the `id` column again. We know that these two tables share ids 0 and 2 and table A doesn't have anything else in it. The resulting table is:| nA | A.attr | nB | B.attr || :::: | ::::::: | ::: | :::::: || s1 | 23 | t1 | 60 || s2 | 7 | t3 | 22 |That's not very exciting. It's the same result as from the `INNER JOIN`. We can do another example that may be more enlightening. Example BLet's do a `LEFT JOIN` of table A from table B. That is, we'd like to make a new table by putting table A into table B. In this case, we'll consider table B our "reference" table. Again, we use the `id` column from comparison. We know that these two tables share ids 0 and 2. This time, table B also contains the id 7, which is not shared by table A. The resulting table is:| nA | A.attr | nB | B.attr || :::: | ::::::: | ::: | :::::: || s1 | 23 | t1 | 60 || None | NaN | t2 | 14 || s2 | 7 | t3 | 22 |Notice that `SQLite` filed in the missing entries for us. This is necessary for completion of the requested join.The `SQLite` commands to accomplish all of this are:```sqlSELECT nA, A.attr, nB, B.attr FROM A LEFT JOIN B ON B.idB = A.idA```and```sqlSELECT nA, A.attr, nB, B.attr FROM B LEFT JOIN A ON A.idA = B.idB```Here is a visualization using Venn diagrams of the `LEFT JOIN`. ExercisesUse the following two tables to do the first two exercises in this section. Table A has the form:| nA | attr | idA || :::: | ::::: | ::: || s1 | 23 | 0 || s2 | 7 | 2 || s3 | 15 | 2 || s4 | 31 | 0 |and table B has the form:| nB | attr | idB || :::: | ::::: | ::: || t1 | 60 | 0 || t2 | 14 | 7 || t3 | 22 | 2 |1. Draw the table that would result from a `LEFT JOIN` using table A as the reference and the `id` columns for comparison.2. Draw the table that would result from a `LEFT JOIN` using table B as the reference and the `id` columns for comparison.3. Create a new table with the following form:| average contribution | number of contributors | candidate last name || :::::::::::::::::::: | :::::::::::::::::::::: | ::::::::::::::::::: || ... | ... | ... | The table should be created using the `LEFT JOIN` clause on the contributors table by joining the candidates table by the `id` column. The `average contribution` column and `number of contributors` column should be obtained using the `AVG` and `COUNT` `SQL` functions. Finally, you should use the `GROUP BY` clause on the candidates last name. **1. Draw the table that would result from a `LEFT JOIN` using table A as the reference and the `id` columns for comparison.**| nA | A.attr | idA | nB | B.attr | idB || :::: | ::::: | ::: | ::: | ::::: | ::: || s1 | 23 | 0 | t1 | 60 | 0 || s2 | 7 | 2 | t3 | 22 | 2 || s3 | 15 | 2 | t3 | 22 | 2 || s4 | 31 | 0 | t1 | 60 | 0 |**2. 
Draw the table that would result from a `LEFT JOIN` using table B as the reference and the `id` columns for comparison.**| nB | B.attr | idB | nA | A.attr | idA || :::: | ::::: | ::: | ::: | ::::: | ::: || t1 | 60 | 0 | s1 | 23 | 0 || t1 | 60 | 0 | s4 | 31 | 0 || t2 | 14 | 7 | NaN | NaN | 7 || t3 | 22 | 2 | s2 | 7 | 2 || t3 | 22 | 2 | s3 | 15 | 2 |**3. Create a new table using the `LEFT JOIN` clause on the contributors table by joining the candidates table by the `id` column. The `average contribution` column and `number of contributors` column should be obtained using the `AVG` and `COUNT` `SQL` functions. Finally, you should use the `GROUP BY` clause on the candidates last name. **
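As a quick, hedged check of the hand-drawn answer to exercise 2 above (again with throwaway in-memory tables that are not part of the graded solution):

```python
import sqlite3

toy = sqlite3.connect(":memory:")
toy.execute("CREATE TABLE A (nA TEXT, attr INTEGER, idA INTEGER)")
toy.execute("CREATE TABLE B (nB TEXT, attr INTEGER, idB INTEGER)")
toy.executemany("INSERT INTO A VALUES (?, ?, ?)",
                [("s1", 23, 0), ("s2", 7, 2), ("s3", 15, 2), ("s4", 31, 0)])
toy.executemany("INSERT INTO B VALUES (?, ?, ?)",
                [("t1", 60, 0), ("t2", 14, 7), ("t3", 22, 2)])
# LEFT JOIN with B as the reference: the unmatched row t2 keeps NULLs on the A side
for row in toy.execute("SELECT nB, B.attr, nA, A.attr FROM B LEFT JOIN A ON A.idA = B.idB"):
    print(row)
toy.close()
```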
###Code
query = '''SELECT AVG(contributors.amount), COUNT(contributors.id), candidates.last_name
FROM contributors
LEFT JOIN candidates ON contributors.candidate_id = candidates.id
GROUP BY candidates.last_name'''
display(viz_tables(['average contribution', 'number of contributors', 'candidates last name'], query))
###Output
_____no_output_____
###Markdown
--- `pandas` We've been working with databases for the last few lectures and learning `SQLite` commands to work with and manipulate the databases. There is a `Python` package called `pandas` that provides broad support for data structures. It can be used to interact with relationsional databases through its own methods and even through `SQL` commands.In the last part of this lecture, you will get to redo a bunch of the database exercises using `pandas`.We won't be able to cover `pandas` from the ground up, but it's a well-documented library and is fairly easy to get up and running. Here's the website: [`pandas`](http://pandas.pydata.org/). Reading a datafile into `pandas`
###Code
# Using pandas naming convention
dfcand = pd.read_csv("candidates.txt", sep="|")
dfcand
dfcontr = pd.read_csv("contributors.txt", sep="|")
dfcontr
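# Hedged aside (added example, not in the original lecture): pandas can also pull the
# same data straight from the SQLite database via the `db` connection opened earlier.
dfcand_sql = pd.read_sql("SELECT * FROM candidates", db)
dfcand_sql.head()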
###Output
_____no_output_____
###Markdown
Reading things in is quite easy with `pandas`. Notice that `pandas` populates empty fields with `NaN` values. The `id` column in the contributors dataset is superfluous. Let's delete it.
###Code
del dfcontr['id']
dfcontr.head()
###Output
_____no_output_____
###Markdown
Very nice! And we used the `head` method to print out the first five rows. Creating a Table with `pandas`. We can use `pandas` to create tables in a database. First, let's create a new database since we've already done a lot on our test database.
###Code
dbp = sqlite3.connect('L19_pandas_DB.sqlite')
csr = dbp.cursor()
csr.execute("DROP TABLE IF EXISTS candidates")
csr.execute("DROP TABLE IF EXISTS contributors")
csr.execute("PRAGMA foreign_keys=1")
csr.execute('''CREATE TABLE candidates (
id INTEGER PRIMARY KEY NOT NULL,
first_name TEXT,
last_name TEXT,
middle_name TEXT,
party TEXT NOT NULL)''')
dbp.commit() # Commit changes to the database
csr.execute('''CREATE TABLE contributors (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
last_name TEXT,
first_name TEXT,
middle_name TEXT,
street_1 TEXT,
street_2 TEXT,
city TEXT,
state TEXT,
zip TEXT,
amount REAL,
date DATETIME,
candidate_id INTEGER NOT NULL,
FOREIGN KEY(candidate_id) REFERENCES candidates(id))''')
dbp.commit()
###Output
_____no_output_____
###Markdown
Last time, we opened the data files with `Python` and then manually used `SQLite` commands to populate the individual tables. We can use `pandas` instead like so.
###Code
dfcand.to_sql("candidates", dbp, if_exists="append", index=False)
###Output
_____no_output_____
###Markdown
How big is our table?
###Code
dfcand.shape
###Output
_____no_output_____
###Markdown
We can visualize the data in our `pandas`-populated table. No surprises here except that `pandas` did everything for us.
###Code
query = '''SELECT * FROM candidates'''
csr.execute(query).fetchall()
###Output
_____no_output_____
###Markdown
Querying a table with `pandas` One Way
###Code
dfcand.query("first_name=='Mike' & party=='D'")
###Output
_____no_output_____
###Markdown
Another Way
###Code
dfcand[(dfcand.first_name=="Mike") & (dfcand.party=="D")]
###Output
_____no_output_____
###Markdown
More Queries
###Code
dfcand[dfcand.middle_name.notnull()]
dfcand[dfcand.first_name.isin(['Mike', 'Hillary'])]
###Output
_____no_output_____
###Markdown
Exercises: 1. Use `pandas` to populate the contributors table. 2. Query the contributors table with the following: 1. List entries where the state is "VA" and the amount is less than $\$400.00$. 2. List entries where the state is "NULL". 3. List entries for the states of Texas and Pennsylvania. 4. List entries where the amount contributed is between $\$10.00$ and $\$50.00$.
###Code
# Populate the contributors table
dfcontr.to_sql("contributors", dbp, if_exists="append", index=False)
dfcontr.shape
query = '''SELECT * FROM contributors'''
contributor_cols_dbp = [col[1] for col in csr.execute("PRAGMA table_info(contributors)")]
display(viz_tables(contributor_cols_dbp, query))
# Query A
dfcontr[(dfcontr.state=="VA") & (dfcontr.amount<400)]
# Query B
dfcontr[dfcontr.state.isnull()]
# Query C
dfcontr[dfcontr.state.isin(['TX', 'PA'])]
# Query D
dfcontr[(dfcontr.amount<=50) & (dfcontr.amount>=10)]
###Output
_____no_output_____
###Markdown
Sorting
###Code
dfcand.sort_values(by='party')
dfcand.sort_values(by='party', ascending=False)
###Output
_____no_output_____
###Markdown
Selecting Columns
###Code
dfcand[['last_name', 'party']]
dfcand[['last_name', 'party']].count()
dfcand[['first_name']].drop_duplicates()
dfcand[['first_name']].drop_duplicates().count()
###Output
_____no_output_____
###Markdown
Exercises: 1. Sort the contributors table by `amount` and order in *descending* order. 2. Select the `first_name` and `amount` columns. 3. Select the `last_name` and `first_name` columns and drop duplicates. 4. Count how many there are after the duplicates have been dropped.
###Code
dfcontr.sort_values(by='amount', ascending=False)
dfcontr[['first_name', 'amount']]
dfcontr[['last_name', 'first_name']].drop_duplicates()
dfcontr[['last_name', 'first_name']].drop_duplicates().count()
###Output
_____no_output_____
###Markdown
Altering Tables Creating a new column is quite easy with `pandas`.
###Code
dfcand['name'] = dfcand['last_name'] + ", " + dfcand['first_name']
dfcand
###Output
_____no_output_____
###Markdown
We can change an existing field as well.
###Code
dfcand.loc[dfcand.first_name == "Mike", "name"]
dfcand.loc[dfcand.first_name == "Mike", "name"] = "Mikey"
dfcand.query("first_name == 'Mike'")
dfcand.loc[dfcand.first_name == "Mike", "name"]
###Output
_____no_output_____
###Markdown
You may recall that `SQLite` doesn't have the functionality to drop a column. It's a one-liner with `pandas`.
###Code
del dfcand['name']
dfcand
###Output
_____no_output_____
###Markdown
Exercises: 1. Create a name column for the contributors table with field entries of the form "last name, first name". 2. For contributors from the state of "PA", change the name to "X". 3. Delete the newly created name column.
###Code
# Create a name col for contributors table
dfcontr['name'] = dfcontr['last_name'] + ", " + dfcontr['first_name']
dfcontr.head()
# Change the name of the contributors from "PA" to "X"
dfcontr.loc[dfcontr.state == "PA", "name"] = "X"
dfcontr[dfcontr.state == "PA"]
# Delete the 'name' column
del dfcontr['name']
dfcontr.head()
###Output
_____no_output_____
###Markdown
Aggregation. We'd like to get information about the tables, such as the maximum amount contributed to the candidates. Here are a bunch of ways to describe the tables.
###Code
dfcand.describe()
###Output
_____no_output_____
###Markdown
It's not very interesting with the candidates table because the candidates table only has one numeric column. Exercise: Use the `describe()` method on the `contributors` table. I'll use the contributors table to do some demos now.
###Code
dfcontr.amount.max()
dfcontr[dfcontr.amount==dfcontr.amount.max()]
dfcontr.groupby("state").sum()
dfcontr.groupby("state")["amount"].sum()
dfcontr.state.unique()
###Output
_____no_output_____
###Markdown
There is also a version of the `LIMIT` clause. It's very intuitive with `pandas`.
###Code
dfcand[0:3]
###Output
_____no_output_____
###Markdown
The usual `Python` slicing works just fine! Joins with `pandas`: `pandas` has some documentation on `joins`: [Merge, join, and concatenate](http://pandas.pydata.org/pandas-docs/stable/merging.html). If you want some more reinforcement on the concepts from earlier regarding `JOIN`, then the `pandas` documentation may be a good place to get it. You may also be interested in [a comparison with `SQL`](http://pandas.pydata.org/pandas-docs/stable/comparison_with_sql.html#compare-with-sql-join). To do joins with `pandas`, we use the `merge` command. Here's an example of an explicit inner join:
###Code
cols_wanted = ['last_name_x', 'first_name_x', 'candidate_id', 'id', 'last_name_y']
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id")[cols_wanted]
###Output
_____no_output_____
###Markdown
Somewhat organized example
###Code
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id")[cols_wanted].groupby('last_name_y').describe()
###Output
_____no_output_____
###Markdown
Other Joins with `pandas`. We didn't cover all possible joins because `SQLite` can only handle the few that we did discuss. As mentioned, there are workarounds for some things in `SQLite`, but not everything. Fortunately, `pandas` can handle pretty much everything. Here are a few joins that `pandas` can handle: * `LEFT OUTER` (already discussed) * `RIGHT OUTER` - Think of the "opposite" of a `LEFT OUTER` join (shade the intersection and *right* set in the Venn diagram). * `FULL OUTER` - Combine everything from both tables (shade the entire Venn diagram) Left Outer Join with `pandas`
###Code
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id", how="left")[cols_wanted]
###Output
_____no_output_____
###Markdown
Right Outer Join with `pandas`
###Code
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id", how="right")[cols_wanted]
###Output
_____no_output_____
###Markdown
Full Outer Join with `pandas`
###Code
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id", how="outer")[cols_wanted]
# Close DB
db.commit()
db.close()
dbp.commit()
dbp.close()
###Output
_____no_output_____ |
Pandas/Dados/Tratamento de Dados Faltantes.ipynb | ###Markdown
Analysis Report V: Treatment of Missing Data
###Code
import pandas as pd
dados = pd.read_csv('aluguel_residencial.csv', sep=';')
dados
dados.isnull()
dados.info()
dados['Valor'].isnull()
dados[dados['Valor'].isnull()]
A = dados.shape[0]
dados.dropna(subset = ['Valor'])
B = dados.shape[0]
A - B
A = dados.shape[0]
dados.dropna(subset = ['Valor'], inplace = True)
B = dados.shape[0]
A - B
dados[dados['Condominio'].isnull()].shape[0]
selecao = (dados['Tipo'] == 'Apartamento') & (dados['Condominio'].isnull())
A = dados.shape[0]
dados = dados[~selecao]
B = dados.shape[0]
A - B
dados[dados['Condominio'].isnull()].shape[0]
selecao = (dados['Tipo'] == 'Apartamento') & (dados['Condominio'].isnull())
dados.fillna(0, inplace = True)
dados = dados.fillna({'Condominio': 0, 'IPTU': 0})
dados.info()
dados.to_csv('aluguel_residencial.csv', sep = ';', index = False)
###Output
_____no_output_____ |
simulation(1).ipynb | ###Markdown
Mass distribution of planets
###Code
#define the masses of all planets
#mass in earth mass
masses = SU.Mp
#convert spec object array to string array
Spec=SU.TargetList.Spec.astype(str)
#assign each palent a spec type
planet_spec = np.array([Spec[i] for i in SU.plan2star])
#for i in Mspec, a value of 0 means the spectral type string starts with 'M'
#in other words, it's an M-type star
strM = 'M'
Mspec = []
for s in planet_spec:
M = s.find(strM)
Mspec.append(M)
#convert to array
Mspec = np.array(Mspec)
#get the index of all the planets around M stars
M_id = np.where(Mspec==0)
#use the indexing to extract mass for all the planets around M stars
M_mass = [masses[i] for i in M_id][0]
M_len = len(M_mass)
print(len(M_mass))
hist, Mbins, _ = plt.hist(M_mass, bins=35)
plt.figure(figsize=(7,7))
Mlogbins = np.logspace(np.log10(Mbins[0]),np.log10(Mbins[-1]),len(Mbins))
plt.hist(M_mass,bins=Mlogbins,color="royalblue")
plt.gca().set_xscale("log")
plt.xlabel('Earth mass (log scale)')
plt.ylabel('Number of planets')
plt.title('Mass distribution of %d planets around M-type stars within 30pc' %M_len)
plt.savefig('M-type.png')
#for i in FGKspec, a value of 0 means the spectral type starts with 'F', 'G', or 'K'
#in other words, it's an FGK-type star
strF = 'F'
strG = 'G'
strK = 'K'
FGKspec = []
for s in planet_spec:
F = s.find(strF)
G = s.find(strG)
K = s.find(strK)
FGKspec.append(F&G&K)
#convert to array
FGKspec = np.array(FGKspec)
#get the index of all the planets around FGK stars
FGK_id = np.where(FGKspec==0)
#use the indexing to extract mass for all the planets around FGK stars
FGK_mass = [masses[i] for i in FGK_id][0]
FGK_len = len(FGK_mass)
print(len(FGK_mass))
hist, FGKbins, _ = plt.hist(FGK_mass, bins=35)
plt.figure(figsize=(7,7))
FGKlogbins = np.logspace(np.log10(FGKbins[0]),np.log10(FGKbins[-1]),len(FGKbins))
plt.hist(FGK_mass,bins=FGKlogbins,color="royalblue")
#plt.xlim(0,10000)
plt.gca().set_xscale("log")
plt.xlabel('Earth mass (log scale)')
plt.ylabel('Number of planets')
plt.title('Mass distribution of %d planets around FGK stars within 30pc'%FGK_len)
plt.savefig('FGK-type.png')
###Output
_____no_output_____
###Markdown
Temperature of planets
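The cells below evaluate what appears to be the standard equilibrium-temperature relation; written out, with $A$ the albedo used in the code, $L_\star$ the stellar luminosity in watts, $d$ the orbital distance in metres, and $\sigma$ the Stefan-Boltzmann constant, this is

$$T_{\mathrm{eq}} = \left(\frac{(1-A)\,L_\star}{16\,\pi\,\sigma\,d^{2}}\right)^{1/4}.$$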
###Code
#define some variables
star_name = SU.TargetList.Name.astype(str)
star_mag = SU.TargetList.MV
albedo = SU.p
#star luminosity in terms of ln(solar luminosity)
star_lum = SU.TargetList.L
#define distance of the planets form their host star
d = SU.d
dist = d.to(u.m)
#define solar luminosity in watts
solar_lum = 3.828E26
#change the lum unit to Watt
star_lum_W = []
for i in star_lum:
lum = solar_lum*(np.e**i)
star_lum_W.append(lum)
#assign each palent the lum of its star
planet_star_lum = np.array([star_lum_W[i] for i in SU.plan2star])
#extract the planets around FGK stars for their corresponding star lum
FGKstar_lum = [planet_star_lum[i] for i in FGK_id][0]
Mstar_lum = [planet_star_lum[i] for i in M_id][0]
#use the indexing to extract distant and albedo for all the planets around FGK stars
FGK_dist = [dist[i] for i in FGK_id][0]
FGK_albedo = [albedo[i] for i in FGK_id][0]
#use the indexing to extract distant adn albedo for all the planets around M stars
M_dist = [dist[i] for i in M_id][0]
M_albedo = [albedo[i] for i in M_id][0]
###Output
_____no_output_____
###Markdown
Temp for planets around FGK stars
###Code
#define Stefan-Boltzmann constant
sigma = 5.6704E-8
#calculate the temperature in K
FGK_T = np.power((1-FGK_albedo)*FGKstar_lum/(16.*np.pi*(FGK_dist**2.)*sigma),1./4)
#change astropy unit to floats
FGK_Temp = FGK_T.value
#limt the range of the temperature to get rid of the extremes
Temp_FGK = []
for i in FGK_Temp:
if i < 2500:
Temp_FGK.append(i)
FGK_len_new = len(Temp_FGK)
plt.figure(figsize=(7,7))
hist, FGK_Tbins, _ = plt.hist(Temp_FGK, bins=70,color="royalblue")
plt.xlim(0,1500)
plt.xlabel('Temperature[K]')
plt.ylabel('Number of planets')
plt.title('Temperature distribution of %d planets around FGK stars within 30pc'%FGK_len_new)
plt.savefig('FGK-T.png')
plt.figure(figsize=(7,7))
FGK_Tlogbins = np.logspace(np.log10(FGK_Tbins[0]),np.log10(FGK_Tbins[-1]),len(FGK_Tbins))
plt.hist(Temp_FGK,bins=FGK_Tlogbins,color="royalblue")
plt.gca().set_xscale("log")
plt.xlabel('Temperature [K] (log scale)')
plt.ylabel('Number of planets')
plt.title('Temperature distribution of %d planets around FGK stars within 30pc'%FGK_len_new)
###Output
_____no_output_____
###Markdown
Temp for planets around M stars
###Code
#calculate the temperature in K
M_T = np.power((1-M_albedo)*Mstar_lum/(16.*np.pi*(M_dist**2.)*sigma),1./4)
#change astropy unit to floats
M_Temp = M_T.value
#limt the range of the temperature to get rid of the extremes
Temp_M = []
for i in M_Temp:
if i < 2500:
Temp_M.append(i)
M_len_new = len(Temp_M)
plt.figure(figsize=(7,7))
hist, M_Tbins, _ = plt.hist(Temp_M, bins=70,color="royalblue")
plt.xlim(0,1500)
plt.xlabel('Temperature[K]')
plt.ylabel('Number of planets')
plt.title('Temperature distribution of %d planets around M stars within 30pc'%M_len_new)
plt.savefig('M-T.png')
plt.figure(figsize=(7,7))
M_Tlogbins = np.logspace(np.log10(M_Tbins[0]),np.log10(M_Tbins[-1]),len(M_Tbins))
plt.hist(Temp_M,bins=M_Tlogbins,color="royalblue")
plt.gca().set_xscale("log")
plt.xlabel('Temperature [K] (log scale)')
plt.ylabel('Number of planets')
plt.title('Temperature distribution of %d planets around M stars within 30pc'%M_len_new)
###Output
_____no_output_____
###Markdown
Planet distance vs temperature
###Code
#change the distance unit to AU
FGK_AU = FGK_dist.to(u.AU)
M_AU = M_dist.to(u.AU)
plt.figure(figsize=(7,7))
plt.errorbar(FGK_AU, FGK_Temp,fmt= '.' ,c='royalblue')
plt.ylim(0,2500)
plt.xlabel('Distance[AU]')
plt.ylabel('Temperature[K]')
plt.title('Temperature-Distance of %d planets around FGK stars within 30pc'%FGK_len_new)
plt.figure(figsize=(7,7))
plt.errorbar(M_AU, M_Temp,fmt= '.' ,c='royalblue')
plt.ylim(0,2500)
plt.xlabel('Distance[AU]')
plt.ylabel('Temperature[K]')
plt.title('Temperature-Distance of %d planets around M stars within 30pc'%M_len_new)
###Output
_____no_output_____ |
Apache Spark + Collaborative Filtering.ipynb | ###Markdown
Install Apache Spark: $ pip install pyspark Initialize spark session:
###Code
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)
###Output
_____no_output_____
###Markdown
File "sample_movielens_ratings.txt" contains rows with content:userId::movieId::rating::timestampFor 29::9::1::1424380312 example:userId=29movieId=9rating=1timestamp=1424380312 Read and parse dataset:
###Code
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS
from pyspark.sql import Row
lines = spark.read.text("sample_movielens_ratings.txt").rdd
parts = lines.map(lambda row: row.value.split("::"))
ratingsRDD = parts.map(lambda p: Row(userId=int(p[0]), movieId=int(p[1]),
rating=float(p[2]), timestamp=float(p[3])))
ratings = spark.createDataFrame(ratingsRDD)
#Split dataset to training and test:
(training, test) = ratings.randomSplit([0.8, 0.2])
###Output
_____no_output_____
###Markdown
Important parameters when using ALS:
- userCol - column with the user identifier
- itemCol - column with the item identifier
- ratingCol - column with the rating; this can be an explicit rating or an implicit one (for example, a kind of behaviour), and in the implicit case implicitPrefs=True should be used for better results
- coldStartStrategy - strategy for the cold-start problem; there are 2 options in Spark: drop (drop NaN values) and nan (return NaN values), with other strategies in development
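As a hedged sketch (not part of the pipeline below; the `alpha` confidence weight is an illustrative assumption), an implicit-feedback variant of the same estimator would be configured like this:

```python
als_implicit = ALS(maxIter=5, regParam=0.01,
                   implicitPrefs=True, alpha=1.0,
                   userCol="userId", itemCol="movieId", ratingCol="rating",
                   coldStartStrategy="drop")
```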
###Code
# Build the recommendation model using ALS on the training data
# Note we set cold start strategy to 'drop' to ensure we don't get NaN evaluation metrics
als = ALS(maxIter=5, regParam=0.01, userCol="userId", itemCol="movieId", ratingCol="rating",
coldStartStrategy="drop")
model = als.fit(training)
# Evaluate the model by computing the RMSE on the test data
predictions = model.transform(test)
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(predictions)
print("Root-mean-square error = " + str(rmse))
# Generate top 10 movie recommendations for each user
userRecs = model.recommendForAllUsers(10)
userRecs.toPandas().head(3)
# Generate top 10 user recommendations for each movie
movieRecs = model.recommendForAllItems(10)
movieRecs.toPandas().head(3)
recommendations_for_users = userRecs.select("userId", "recommendations.movieId")
recommendations_for_users.collect()
json_rdd = recommendations_for_users.toJSON()
json_rdd.collect()
###Output
_____no_output_____ |
Calculation_of_daily_pivot_levels.ipynb | ###Markdown
https://www.tradingview.com/support/solutions/43000521824-pivot-points-standard/
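For convenience, the code below implements the standard floor-trader pivot definitions from the page linked above, with $H$, $L$, $C$ the previous day's high, low and close:

$$P=\tfrac{H+L+C}{3},\quad R_1=2P-L,\quad S_1=2P-H,\quad R_2=P+(H-L),\quad S_2=P-(H-L),\quad R_3=P+2(H-L),\quad S_3=P-2(H-L).$$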
###Code
last_day['Pivot'] = (last_day['High'] + last_day['Low'] + last_day['Close'])/3
last_day['R1'] = 2*last_day['Pivot'] - last_day['Low']
last_day['S1'] = 2*last_day['Pivot'] - last_day['High']
last_day['R2'] = last_day['Pivot'] + (last_day['High'] - last_day['Low'])
last_day['S2'] = last_day['Pivot'] - (last_day['High'] - last_day['Low'])
last_day['R3'] = last_day['Pivot'] + 2*(last_day['High'] - last_day['Low'])
last_day['S3'] = last_day['Pivot'] - 2*(last_day['High'] - last_day['Low'])
last_day
###Output
_____no_output_____ |
Basics/ButlerTutorial.ipynb | ###Markdown
DC2 Data Access with the Gen-2 ButlerOwner: **Yao-Yuan Mao** (@yymao) and **Johann Cohen-Tanugi** (@johannct) based on work by **Daniel Perrefort** ([@djperrefort](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@djperrefort))Last Verified to Run: **2020-11-28**Verified Stack Release: **w2020_48** Core ConceptsThis notebook provides a hands-on overview of how to interact with the Gen-2 `Butler` (it should be updated for Gen-3, once available). The `Butler` provides a way to access information using a uniform interface without needing to keep track of how the information is internally stored or organized. Data access with `Butler` has three levels you need to be aware of:1. Each instantiated `Butler` object provides access to a collection of datasets called a **repository**. Each repository is defined by Butler using the local file directory where the data is stored.2. Each data set in a **repository** is assigned a unique name called a **type**. These types are strings that describe the data set and should not be confused with an "object type" as defined by Python.3. Individual entries in a data set are identified using a unique **data identifier**, which is a dictionary who's allowed keys and values depend on the data set you are working with. Learning Objectives:This notebook demonstrates how to use the Gen-2 `Butler` object from the DM stack to access and manipulate data. After finishing this notebook, users will know how to:1. Load and access a data repository using `Butler`2. Select subsets of data and convert data into familiar data structures3. Use `Butler` to access coordinate information and cutout postage stamps4. Use `Butler` to access a skymap
###Code
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
import os
import lsst.afw.display as afwDisplay
from lsst.daf.persistence import Butler
import lsst.geom
from lsst.geom import SpherePoint, Angle
%matplotlib inline
import desc_dc2_dm_data
dc2_version = '2.2i_dr6_wfd'
butler = desc_dc2_dm_data.get_butler(dc2_version)
repo = desc_dc2_dm_data.REPOS[dc2_version]
###Output
_____no_output_____
###Markdown
Loading Data. To start we instantiate a `Butler` object by providing it with the directory of the **repository** we want to access. Next, we load a **type** of dataset and select data from a single **data identifier**. For this demonstration we consider the `deepCoadd_ref` dataset, which contains tables of information concerning coadded images used in the differencing image pipeline. The id values for this data set include two required values: `tract` and `patch` which denote sections of the sky.
###Code
# We choose an "arbitrary" tract and patch.
# Want to figure out how we found this tract and patch? Check out the notebook on Exploring_A_Data_Repo.ipynb
# should contain a cluster at z=0.66 M=1.5e15
tract_id = 4024
patch_id = '3,4'
data_id = {'tract': tract_id, 'patch': patch_id}
dataset_type = 'deepCoadd_ref'
# We can check that the data exists before we try to read it
data_exists = butler.datasetExists(datasetType=dataset_type, dataId=data_id)
print('Data exists for ID:', data_exists)
data_entry = butler.get(dataset_type, dataId=data_id)
data_entry
###Output
_____no_output_____
###Markdown
The data table returned above is formatted as a `SourceCatalog` object, which is essentially a collection of `numpy` arrays. We can see this when we index a particular column.
###Code
print(type(data_entry['merge_measurement_i']))
###Output
_____no_output_____
###Markdown
`SourceCatalog` objects have their own set of methods for table manipulations (sorting, appending rows, etc.). However, we can also work with the data in a more familiar format, such as an astropy `Table` or a pandas `DataFrame`.
###Code
data_frame = data_entry.asAstropy().to_pandas()
data_frame.head()
###Output
_____no_output_____
###Markdown
It is important to note that `Butler` objects do not always return tabular data. We will see an example of this later when we load and parse image data. Selecting Subsets of DataIn practice, you may not know the format of the data identifier for a given data set. In this case, the `getKeys()` method can be used to determine the key values expected in a **data identifier**.
###Code
data_id_format = butler.getKeys(dataset_type)
print('Expected data id format:', data_id_format)
###Output
_____no_output_____
###Markdown
It is important to note that if you specify a key that is not part of the data **type**, the `Butler` will silently ignore it. This can be misleading. For example, in the previous example we read in a table that has a column of booleans named `merge_footprint_i`. If you specify `merge_footprint_i: True` in the dataID and rerun the example, `Butler` will ignore the extra key silently. As a result, you might expect the returned table to only include values where `merge_footprint_i` is `True`, but that isn't what happens. Here is an example of the correct way to select data from the returned table:
###Code
# An example of what not to do!!
#
# new_data_id = {'tract': 0, 'patch': '1,1', 'merge_footprint_i': True}
# merged_i_data = butler.get(dataset_type, dataId=new_data_id)
# assert merged_i_data['merge_measurement_i'].all()
# Do this instead...
new_data_id = {'tract': tract_id, 'patch': patch_id}
merged_i_data = butler.get(dataset_type, dataId=new_data_id)
merged_i_data = merged_i_data[merged_i_data['merge_measurement_i']].copy(True)
# Check that the returned table does in fact have only entries where
# merge_footprint_i is True
print(merged_i_data['merge_measurement_i'].all())
###Output
_____no_output_____
###Markdown
**Important:** Note the use of `copy` above which is required to create an array that is contiguous in memory (yay!)You can also select all complete dataIds for a dataset type that match a partial (or empty) dataId. For example, the below cell iterates over all possible ids and checks if the corresponding file exists.
###Code
subset = butler.subset(dataset_type, dataId=data_id)
id_list = [dr.dataId for dr in subset if dr.datasetExists()]
print(f'Available Ids:\n {id_list}')
###Output
_____no_output_____
###Markdown
Creating Postage StampsWhen dealing with image data, we can use `Butler` to generate postage stamps at a given set of coordinates. For this example, we consider the `deepCoadd` data set, which has one extra key value than the previous example.
###Code
coadd_type = 'deepCoadd'
butler.getKeys(coadd_type)
###Output
_____no_output_____
###Markdown
In order to generate a postage stamp, we need to define the center and size of the cutout. First, we pick an RA and Dec from our previous example.
###Code
# Let's select some nice large galaxies for the purpose of creating postage stamp images
from easyquery import Query
nice_galaxy_query = Query(
"base_ClassificationExtendedness_value == 1",
"modelfit_CModel_instFlux > 5000",
"modelfit_CModel_instFlux / modelfit_CModel_instFluxErr > 10",
"base_PixelFlags_flag == 0",
"merge_footprint_g",
"merge_footprint_r",
"detect_isPatchInner",
)
nice_galaxy_indices = np.flatnonzero(nice_galaxy_query.mask(merged_i_data))
# Pick an RA and Dec
i = nice_galaxy_indices[1]
ra = np.degrees(merged_i_data['coord_ra'][i])
dec = np.degrees(merged_i_data['coord_dec'][i])
###Output
_____no_output_____
###Markdown
Next we plot our cutout.
###Code
# Retrieve the image using butler
coadd_id = {'tract': tract_id, 'patch': patch_id, 'filter': 'i'}
image = butler.get(coadd_type, dataId=coadd_id)
# Let's take a look at the full image first
fig = plt.figure(figsize=(10,10))
display = afwDisplay.Display(frame=1, backend='matplotlib')
display.scale("linear", "zscale")
display.mtv(image.getMaskedImage().getImage())
###Output
_____no_output_____
###Markdown
Since the postage stamp was generated using `Butler`, it is represented as an `afwImage` object. This is a data type from the DM stack that is used to represent images. Since it is a DM object, we choose to plot it using the DM `afwDisplay` module.
###Code
# Define the center and size of our cutout
radec = SpherePoint(ra, dec, lsst.geom.degrees)
cutout_size = 300
cutout_extent = lsst.geom.ExtentI(cutout_size, cutout_size)
# Retrieve cutout
postage_stamp = image.getCutout(radec, cutout_extent)
xy = postage_stamp.getWcs().skyToPixel(radec)
display = afwDisplay.Display(frame=2, backend='matplotlib')
display.mtv(postage_stamp.getMaskedImage().getImage())
display.scale("linear", "zscale")
display.dot('o', xy.getX(), xy.getY(), ctype='red')
display.show_colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
Note that the cutout image is aware of the masks and pixel values of the original image. This is why the axis labels in the above cutout are so large. We also note that the orientation of the postage stamp is in the x, y orientation of the original coadded image. Creating an RGB picture of a coadd patchA nice and simple interface is also available to create pretty pictures of patch areas (stolen from D. Boutigny). We are using the same patch as above, and define our three colors as bands r,i and g.Then we ask the `deepCoadd` type from the butler, which corresponds to coadded images. We finally make use of the `afw.display` interface to build the RGB image.
###Code
import lsst.afw.display.rgb as rgb
dataId = {'tract':tract_id, 'patch':patch_id}
bandpass_color_map = {'green':'r', 'red':'i', 'blue':'g'}
exposures = {}
for bandpass in bandpass_color_map.values():
dataId['filter'] = bandpass
exposures[bandpass] = butler.get(coadd_type, dataId=dataId)
fig = plt.figure(figsize=(10,10))
rgb_im = rgb.makeRGB(*(exposures[bandpass_color_map[color]].getMaskedImage().getImage()
for color in ('red', 'green', 'blue')), Q=8, dataRange=1.0,
xSize=None, ySize=None)
rgb.displayRGB(rgb_im)
###Output
_____no_output_____
###Markdown
In the RGB map the cluster appears very red! Now we can also create RGB cutout images!
###Code
rgb_im = rgb.makeRGB(*(exposures[bandpass_color_map[color]].getCutout(radec, cutout_extent).getMaskedImage().getImage()
for color in ('red', 'green', 'blue')), Q=8, dataRange=1.0,
xSize=None, ySize=None)
rgb.displayRGB(rgb_im)
###Output
_____no_output_____
###Markdown
Selecting an Area on the Sky with a Sky Map. As a final example, we consider a third type of data that can be accessed via `Butler` called a `skyMap`. Sky maps allow you to look up information for a given `tract` and `patch`. You may notice from the below example that data set **types** tend to follow the naming convention of having a base name (e.g. `'deepCoadd'`) followed by a descriptor (e.g. `'_skyMap'`).
###Code
skymap = butler.get('deepCoadd_skyMap')
tract_info = skymap[0]
tract_info
patch_info = tract_info.getPatchInfo((1,1))
patch_info
tract_bbox = tract_info.getBBox()
tract_pix_corners = lsst.geom.Box2D(tract_bbox).getCorners()
print('Tract corners in pixels:\n', tract_pix_corners)
wcs = tract_info.getWcs()
tract_deg_corners = wcs.pixelToSky(tract_pix_corners)
tract_deg_corners = [[c.getRa().asDegrees(), c.getDec().asDegrees()] for c in tract_deg_corners]
print('\nTract corners in degrees:\n', tract_deg_corners)
#You can also go in reverse to find the tract, patch that contains a coordinate (320.8,-0.4)
coordList = [SpherePoint(Angle(np.radians(320.8)),Angle(np.radians(-0.4)))]
tractInfo = skymap.findClosestTractPatchList(coordList)
print(tractInfo[0][0])
print(tractInfo[0][1])
###Output
_____no_output_____
###Markdown
Data Access with the Gen-2 ButlerOwner: **Daniel Perrefort** ([@djperrefort](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@djperrefort))Last Verified to Run: **2020-07-17**Verified Stack Release: **v20.0.0** Core ConceptsThis notebook provides a hands-on overview of how to interact with the Gen-2 `Butler` (it should be updated for Gen-3, once available). The `Butler` provides a way to access information using a uniform interface without needing to keep track of how the information is internally stored or organized. Data access with `Butler` has three levels you need to be aware of:1. Each instantiated `Butler` object provides access to a collection of datasets called a **repository**. Each repository is defined by Butler using the local file directory where the data is stored.2. Each data set in a **repository** is assigned a unique name called a **type**. These types are strings that describe the data set and should not be confused with an "object type" as defined by Python.3. Individual entries in a data set are identified using a unique **data identifier**, which is a dictionary who's allowed keys and values depend on the data set you are working with. Learning Objectives:This notebook demonstrates how to use the Gen-2 `Butler` object from the DM stack to access and manipulate data. After finishing this notebook, users will know how to:1. Load and access a data repository using `Butler`2. Select subsets of data and convert data into familiar data structures3. Use `Butler` to access coordinate information and cutout postage stamps4. Use `Butler` to access a skymap
###Code
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
import lsst.afw.display as afwDisplay
from lsst.daf.persistence import Butler
import lsst.geom
from lsst.geom import SpherePoint, Angle
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading Data. To start we instantiate a `Butler` object by providing it with the directory of the **repository** we want to access. Next, we load a **type** of dataset and select data from a single **data identifier**. For this demonstration we consider the `deepCoadd_ref` dataset, which contains tables of information concerning coadded images used in the differencing image pipeline. The id values for this data set include two required values: `tract` and `patch` which denote sections of the sky.
###Code
repo = '/project/shared/data/DATA_ci_hsc/rerun/coaddForcedPhot'
butler = Butler(repo)
# We choose an "arbitrary" tract and patch.
# Want to figure out how we found this tract and patch? Check out the notebook on Exploring_A_Data_Repo.ipynb
data_id = {'tract': 0, 'patch': '1,1'}
dataset_type = 'deepCoadd_ref'
# We can check that the data exists before we try to read it
data_exists = butler.datasetExists(datasetType=dataset_type, dataId=data_id)
print('Data exists for ID:', data_exists)
data_entry = butler.get(dataset_type, dataId=data_id)
data_entry
###Output
_____no_output_____
###Markdown
The data table returned above is formatted as a `SourceCatalog` object, which is essentially a collection of `numpy` arrays. We can see this when we index a particular column.
###Code
print(type(data_entry['merge_measurement_i']))
###Output
_____no_output_____
###Markdown
`SourceCatalog` objects have their own set of methods for table manipulations (sorting, appending rows, etc.). However, we can also work with the data in a more familiar format, such as an astropy `Table` or a pandas `DataFrame`.
###Code
data_frame = data_entry.asAstropy().to_pandas()
data_frame.head()
###Output
_____no_output_____
###Markdown
It is important to note that `Butler` objects do not always return tabular data. We will see an example of this later when we load and parse image data. Selecting Subsets of DataIn practice, you may not know the format of the data identifier for a given data set. In this case, the `getKeys()` method can be used to determine the key values expected in a **data identifier**.
###Code
data_id_format = butler.getKeys(dataset_type)
print('Expected data id format:', data_id_format)
###Output
_____no_output_____
###Markdown
It is important to note that if you specify a key that is not part of the data **type**, the `Butler` will silently ignore it. This can be misleading. For example, in the previous example we read in a table that has a column of booleans named `merge_footprint_i`. If you specify `merge_footprint_i: True` in the dataID and rerun the example, `Butler` will ignore the extra key silently. As a result, you might expect the returned table to only include values where `merge_footprint_i` is `True`, but that isn't what happens. Here is an example of the correct way to select data from the returned table:
###Code
# An example of what not to do!!
#
# new_data_id = {'tract': 0, 'patch': '1,1', 'merge_footprint_i': True}
# merged_i_data = butler.get(dataset_type, dataId=new_data_id)
# assert merged_i_data['merge_measurement_i'].all()
# Do this instead...
new_data_id = {'tract': 0, 'patch': '1,1'}
merged_i_data = butler.get(dataset_type, dataId=new_data_id)
merged_i_data = merged_i_data[merged_i_data['merge_measurement_i']].copy(True)
# Check that the returned table does in fact have only entries where
# merge_footprint_i is True
print(merged_i_data['merge_measurement_i'].all())
###Output
_____no_output_____
###Markdown
**Important:** Note the use of `copy` above which is required to create an array that is contiguous in memory (yay!)You can also select all complete dataIds for a dataset type that match a partial (or empty) dataId. For example, the below cell iterates over all possible ids and checks if the corresponding file exists.
###Code
subset = butler.subset(dataset_type, dataId=data_id)
id_list = [dr.dataId for dr in subset if dr.datasetExists()]
print(f'Available Ids:\n {id_list}')
###Output
_____no_output_____
###Markdown
Creating Postage Stamps. When dealing with image data, we can use `Butler` to generate postage stamps at a given set of coordinates. For this example, we consider the `deepCoadd` data set, which has one more key value than the previous example.
###Code
coadd_type = 'deepCoadd'
butler.getKeys(coadd_type)
###Output
_____no_output_____
###Markdown
In order to generate a postage stamp, we need to define the center and size of the cutout. First, we pick an RA and Dec from our previous example.
###Code
# Find indices of all targets with a flux between 100 and 500 as follows
# np.where((merged_i_data['base_PsfFlux_flux'] > 100) & (merged_i_data['base_PsfFlux_flux'] < 500))
# Pick an RA and Dec
i = 1000
ra = np.degrees(merged_i_data['coord_ra'][i])
dec = np.degrees(merged_i_data['coord_dec'][i])
###Output
_____no_output_____
###Markdown
Next we plot our cutout.
###Code
# Retrieve the image using butler
coadd_id = {'tract': 0, 'patch': '1,1', 'filter': 'HSC-I'}
image = butler.get(coadd_type, dataId=coadd_id)
# Define the center and size of our cutout
radec = SpherePoint(ra, dec, lsst.geom.degrees)
cutout_size = 150
cutout_extent = lsst.geom.ExtentI(cutout_size, cutout_size)
# Cutout and optionally save the postage stamp to file
postage_stamp = image.getCutout(radec, cutout_extent)
# postage_stamp.writeFits(<output_filename>)
###Output
_____no_output_____
###Markdown
Since the postage stamp was generated using `Butler`, it is represented as an `afwImage` object. This is a data type from the DM stack that is used to represent images. Since it is a DM object, we choose to plot it using the DM `afwDisplay` module.
###Code
xy = postage_stamp.getWcs().skyToPixel(radec)
display = afwDisplay.Display(frame=1, backend='matplotlib')
display.mtv(postage_stamp)
display.scale("linear", "zscale")
display.dot('o', xy.getX(), xy.getY(), ctype='red')
display.show_colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
Note that the cutout image is aware of the masks and pixel values of the original image. This is why the axis labels in the above cutout are so large. We also note that the orientation of the postage stamp is in the x, y orientation of the original coadded image. Selecting an Area on the Sky with a Sky Map. As a final example, we consider a third type of data that can be accessed via `Butler` called a `skyMap`. Sky maps allow you to look up information for a given `tract` and `patch`. You may notice from the below example that data set **types** tend to follow the naming convention of having a base name (e.g. `'deepCoadd'`) followed by a descriptor (e.g. `'_skyMap'`).
###Code
skymap = butler.get('deepCoadd_skyMap')
tract_info = skymap[0]
tract_info
patch_info = tract_info.getPatchInfo((1,1))
patch_info
tract_bbox = tract_info.getBBox()
tract_pix_corners = lsst.geom.Box2D(tract_bbox).getCorners()
print('Tract corners in pixels:\n', tract_pix_corners)
wcs = tract_info.getWcs()
tract_deg_corners = wcs.pixelToSky(tract_pix_corners)
tract_deg_corners = [[c.getRa().asDegrees(), c.getDec().asDegrees()] for c in tract_deg_corners]
print('\nTract corners in degrees:\n', tract_deg_corners)
#You can also go in reverse to find the tract, patch that contains a coordinate (320.8,-0.4)
coordList = [SpherePoint(Angle(np.radians(320.8)),Angle(np.radians(-0.4)))]
tractInfo = skymap.findClosestTractPatchList(coordList)
print(tractInfo[0][0])
print(tractInfo[0][1])
###Output
_____no_output_____ |
cgatpipelines/tools/pipeline_docs/pipeline_bamstats/Jupyter_report/CGAT_bamstats_report.ipynb | ###Markdown
jQuery(document).ready(function($) { $(window).load(function(){ $('preloader').fadeOut('slow',function(){$(this).remove();}); }); }); divpreloader { position: fixed; left: 0; top: 0; z-index: 999; width: 100%; height: 100%; overflow: visible; background: fff url('http://preloaders.net/preloaders/720/Moving%20line.gif') no-repeat center center; } function code_toggle() { if (code_shown){ $('div.input').hide('500'); $('toggleButton').val('Show Code') } else { $('div.input').show('500'); $('toggleButton').val('Hide Code') } code_shown = !code_shown } $( document ).ready(function(){ code_shown=false; $('div.input').hide() });
###Code
# <font color='firebrick'><center>Report for Bam Stats</center></font>
### This report details the bamstats output tables that have been generated as part of running bamstats tool.
<br>
###Output
_____no_output_____
###Markdown
from IPython.display import display, Markdownfrom IPython.display import HTMLimport IPython.core.display as diimport csvimport numpy as npimport zlibimport cgatcore.iotools as iotoolsimport itertools as ITLimport osimport stringimport pandas as pdimport sqlite3import matplotlib as mplfrom matplotlib.backends.backend_pdf import PdfPages noqa: E402mpl.use('Agg') noqa: E402import matplotlib.pyplot as pltfrom matplotlib.ticker import FuncFormatterimport matplotlib.font_manager as font_managerimport matplotlib.lines as mlinesfrom matplotlib.colors import ListedColormapfrom matplotlib import cmfrom matplotlib import rc, font_managerimport cgatcore.experiment as Eimport mathfrom random import shuffleimport matplotlib as mplimport datetimeimport seaborn as snsimport nbformatPlot customizationplt.ioff()plt.style.use('seaborn-white')plt.style.use('ggplot')title_font = {'size':'20','color':'darkblue', 'weight':'bold', 'verticalalignment':'bottom'} Bottom vertical alignment for more spaceaxis_font = {'size':'18', 'weight':'bold'}For summary page pdf'''To add description pageplt.figure() plt.axis('off')plt.text(0.5,0.5,"my title",ha='center',va='center')pdf.savefig()'''Panda data frame cutomizationpd.options.display.width = 80pd.set_option('display.max_colwidth', -1)feature = ['input','mapped','spliced','unspliced']colors_category = ['yellowgreen', 'pink', 'gold', 'lightskyblue', 'orchid','darkgoldenrod','skyblue','b', 'red', 'darkorange','grey','violet','magenta','cyan', 'hotpink','mediumslateblue']threshold = 5def hover(hover_color="ffff99"): return dict(selector="tr:hover", props=[("background-color", "%s" % hover_color)])def y_fmt(y, pos): decades = [1e9, 1e6, 1e3, 1e0, 1e-3, 1e-6, 1e-9 ] suffix = ["G", "M", "k", "" , "m" , "u", "n" ] if y == 0: return str(0) for i, d in enumerate(decades): if np.abs(y) >=d: val = y/float(d) signf = len(str(val).split(".")[1]) if signf == 0: return '{val:d} {suffix}'.format(val=int(val), suffix=suffix[i]) else: if signf == 1: print(val, signf) if str(val).split(".")[1] == "0": return '{val:d} {suffix}'.format(val=int(round(val)), suffix=suffix[i]) tx = "{"+"val:.{signf}f".format(signf = signf) +"} {suffix}" return tx.format(val=val, suffix=suffix[i]) return y return ydef getTables(dbname): ''' Retrieves the names of all tables in the database. Groups tables into dictionaries by annotation ''' dbh = sqlite3.connect(dbname) c = dbh.cursor() statement = "SELECT name FROM sqlite_master WHERE type='table'" c.execute(statement) tables = c.fetchall() print(tables) c.close() dbh.close() return def readDBTable(dbname, tablename): ''' Reads the specified table from the specified database. 
Returns a list of tuples representing each row ''' dbh = sqlite3.connect(dbname) c = dbh.cursor() statement = "SELECT * FROM %s" % tablename c.execute(statement) allresults = c.fetchall() c.close() dbh.close() return allresultsdef getDBColumnNames(dbname, tablename): dbh = sqlite3.connect(dbname) res = pd.read_sql('SELECT * FROM %s' % tablename, dbh) dbh.close() return res.columnsdef plotBamstats(df,i_index,name,track_name,colors,xname,titlename): fig,ax = plt.subplots() ax.grid(which='major', linestyle='-', linewidth='0.25') ax.yaxis.set_major_formatter(FuncFormatter(y_fmt)) index=list(range(1,len(df.loc[track_name])+1)) plt.bar(index,df.loc[df.index[i_index]],0.50,color=colors,label=df.index[i_index],edgecolor=colors) fig = plt.gcf() fig.set_size_inches(12,8) plt.xticks(fontsize = 14,weight='bold') plt.yticks(fontsize = 14,weight='bold') legend_properties = {'weight':'bold','size':'14'} leg = plt.legend(title="Sample",prop=legend_properties,bbox_to_anchor=(1.36,1.01),frameon=True) leg.get_frame().set_edgecolor('k') leg.get_frame().set_linewidth(2) leg.get_title().set_fontsize(16) leg.get_title().set_fontweight('bold') plt.xlabel(xname,**axis_font) plt.ylabel('Number of Reads',**axis_font,labelpad=42) plt.title(''.join(["Distribution of ",titlename]), **title_font) plt.tight_layout() plt.savefig(''.join([df.index[i_index],name,'.png']),bbox_inches='tight',pad_inches=0.6) print("\n\n") plt.show() plt.close() return fig def BamStatsReport(dbname, tablename,tablenm,tablemapq): nh table trans = pd.DataFrame(readDBTable(dbname,tablename)) trans.columns = getDBColumnNames(dbname,tablename) df = trans.T mapq table trans_mapq = pd.DataFrame(readDBTable(dbname,tablemapq)) trans_mapq.columns = getDBColumnNames(dbname,tablemapq) df_mapq = trans_mapq.T nm table trans_nm = pd.DataFrame(readDBTable(dbname,tablenm)) trans_nm.columns = getDBColumnNames(dbname,tablenm) df_nm = trans_nm.T for i in range(0,(df.shape[0]-1)): pdf=PdfPages(str("_".join([df.index[i],"bam_stats_summary.pdf"]))) print("\n") fig = plotBamstats(df,i,"_bamstatsNH_tags",'nh','green','\nNumber of hits (NH tag)',"NH tag") pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6) fig = plotBamstats(df_mapq,i,"_bamstatsMapping_quality",'mapq','firebrick','\nMapping quality',"mapping quality") pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6) fig = plotBamstats(df_nm,i,"_bamstatsMismatch_stats",'nm','a80975','\nNo. of mismatch',"mismatch") pdf.savefig(fig,bbox_inches='tight',pad_inches=0.6) pdf.close()getTables("csvdb")BamStatsReport("../csvdb","bam_stats_nh","bam_stats_nm","bam_stats_mapq")
###Code
<script>
$(document).ready(function(){
$('div.prompt').hide();
$('div.back-to-top').hide();
$('nav#menubar').hide();
$('.breadcrumb').hide();
$('.hidden-print').hide();
});
</script>
<footer id="attribution" style="float:right; color:#999; background:#fff;">
Created with Jupyter, by Reshma.
</footer>
###Output
_____no_output_____ |
machine_learning/svd_compression.ipynb | ###Markdown
Image Compression with SVD**Singular value decomposition** (SVD) is a factorization method which generalizes the eigendecomposition to rectangular matrices. SVD can also be used to approximate a matrix. The decomposition of the matrix $X$ is defined as $X=U\Sigma V^{T}$, where $U$ and $V^{T}$ are unitary matrices and $\Sigma$ is a rectangular diagonal matrix.This example showcases the use of SVD for simple image compression. Load Data
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import skimage
# Load and prepare the image (X)
#filepath = './resources/cat.jpg'
#A = mpimg.imread(filepath)
A = skimage.data.chelsea()
X = np.mean(A, 2) # from RGB to grayscale
# Show the grayscale image and image shape
img = plt.imshow(X)
img.set_cmap('gray')
plt.title('Input image')
plt.show()
print('X shape :', X.shape)
###Output
_____no_output_____
###Markdown
SVD decompositionCompute the decomposition : $X=U\Sigma V^{T}$
###Code
U, singular_values, VT = np.linalg.svd(X, full_matrices=False)
S = np.diag(singular_values)
print('U shape :', U.shape)
print('Sigma shape :', S.shape)
print('VT shape :', VT.shape)
###Output
U shape : (300, 300)
Sigma shape : (300, 300)
VT shape : (300, 451)
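###Markdown
As a quick sanity check of the factorization on a toy example, the minimal sketch below (the variable names are illustrative only) verifies that the product of the returned factors reproduces the input matrix and that NumPy returns the singular values in descending order.
###Code
import numpy as np
# Small random matrix: U @ diag(s) @ VT should reproduce it almost exactly
M = np.random.default_rng(0).normal(size=(4, 6))
U_small, s_small, VT_small = np.linalg.svd(M, full_matrices=False)
print(np.allclose(M, U_small @ np.diag(s_small) @ VT_small))  # expected: True
print(np.all(np.diff(s_small) <= 0))                          # singular values come back sorted in descending order
###Output
_____no_output_____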
###Markdown
CompressRebuild an approximation $X \approx U_{r}\Sigma_{r} {V_{r}}^{T}$.The **singular values** are returned by NumPy in descending order, so we can keep only the first few singular values of $\Sigma$ (together with the corresponding columns of $U$ and rows of $V^{T}$) to retain the most relevant information.
###Code
REDUCED_RANK = 20 # how many singular values to keep
# Compute approximated image
Ur = U[:,:REDUCED_RANK]
Sr = S[:REDUCED_RANK,:REDUCED_RANK]
VTr = VT[:REDUCED_RANK,:]
approximatedX = Ur @ Sr @ VTr
# Show approximated image, cumulative singular values and shapes
ax = plt.subplot()
img = ax.imshow(approximatedX)
img.set_cmap('gray')
plt.title('Approximated image with rank-{}'.format(REDUCED_RANK))
plt.show()
cumulative_sv = np.cumsum(singular_values) / np.sum(singular_values)
ax = plt.subplot()
ax.scatter(REDUCED_RANK,cumulative_sv[REDUCED_RANK])
ax.plot(cumulative_sv)
plt.title('Cumulative sum of all singular values')
plt.show()
print('Ur shape :', Ur.shape)
print('Sr shape :', Sr.shape)
print('Vtr shape :', VTr.shape)
print('approximatedX shape :', approximatedX.shape) # equal to X.shape
ratio = X.size / (Ur.size + REDUCED_RANK + VTr.size)
print('The compressed image is {:.2f}x smaller than the original'.format(ratio))
###Output
_____no_output_____ |
Econometrics_301_Milestone1_May_26.ipynb | ###Markdown
Import data
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Data Metrics
###Code
df=pd.read_csv("https://raw.githubusercontent.com/coinmetrics-io/data/master/csv/eth.csv")
df.head()
df=df.dropna()
###Output
_____no_output_____
###Markdown
Statistic
###Code
df["PriceUSD"]
df.tail()
df.info()
df.describe()
###Output
_____no_output_____
###Markdown
Regression
###Code
import statsmodels.api as sm
# define the dependent and independent variables
X=df[['AdrActCnt','NVTAdj90']]
y=df['PriceUSD']
# add a constant to the dependent variables
X= sm.add_constant(X)
X.head()
# conduct regression
model = sm.OLS(y, X).fit()
# print model summary
print(model.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: PriceUSD R-squared: 0.175
Model: OLS Adj. R-squared: 0.174
Method: Least Squares F-statistic: 185.9
Date: Wed, 26 May 2021 Prob (F-statistic): 6.17e-74
Time: 07:50:38 Log-Likelihood: -13412.
No. Observations: 1752 AIC: 2.683e+04
Df Residuals: 1749 BIC: 2.685e+04
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 526.0366 37.059 14.195 0.000 453.353 598.721
AdrActCnt 0.0006 3.81e-05 15.467 0.000 0.001 0.001
NVTAdj90 -6.6735 0.679 -9.827 0.000 -8.005 -5.342
==============================================================================
Omnibus: 932.819 Durbin-Watson: 0.067
Prob(Omnibus): 0.000 Jarque-Bera (JB): 21069.500
Skew: 2.005 Prob(JB): 0.00
Kurtosis: 19.509 Cond. No. 1.40e+06
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.4e+06. This might indicate that there are
strong multicollinearity or other numerical problems.
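###Markdown
The warning above points to possible multicollinearity between the regressors. As an optional follow-up check (a sketch using statsmodels' variance inflation factor; the usual cutoff of about 10 is a rule of thumb, not a hard limit), we could compute the VIF of each column of the design matrix X built above.
###Code
from statsmodels.stats.outliers_influence import variance_inflation_factor
vifs = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])], index=X.columns)
print(vifs)  # values far above ~10 suggest problematic collinearity
###Output
_____no_output_____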
|
scatter.ipynb | ###Markdown
Relationships between variables====================Copyright 2015 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](http://creativecommons.org/licenses/by/4.0/)
###Code
# this future import makes this code mostly compatible with Python 2 and 3
from __future__ import print_function, division
import numpy as np
import pandas as pd
import math
import thinkplot
import thinkstats2
np.random.seed(17)
%matplotlib inline
###Output
_____no_output_____
###Markdown
To explore the relationship between height and weight, I'll load data from the Behavioral Risk Factor Surveillance Survey (BRFSS).
###Code
def ReadBrfss(filename='CDBRFS08.ASC.gz', compression='gzip', nrows=None):
"""Reads the BRFSS data.
filename: string
compression: string
nrows: int number of rows to read, or None for all
returns: DataFrame
"""
var_info = [
('age', 101, 102, int),
('sex', 143, 143, int),
('wtyrago', 127, 130, int),
('finalwt', 799, 808, int),
('wtkg2', 1254, 1258, int),
('htm3', 1251, 1253, int),
]
columns = ['name', 'start', 'end', 'type']
variables = pd.DataFrame(var_info, columns=columns)
variables.end += 1
dct = thinkstats2.FixedWidthVariables(variables, index_base=1)
df = dct.ReadFixedWidth(filename, compression=compression, nrows=nrows)
CleanBrfssFrame(df)
return df
###Output
_____no_output_____
###Markdown
The following function cleans some of the variables we'll need.
###Code
def CleanBrfssFrame(df):
"""Recodes BRFSS variables.
df: DataFrame
"""
# clean age
df.age.replace([7, 9], float('NaN'), inplace=True)
# clean height
df.htm3.replace([999], float('NaN'), inplace=True)
# clean weight
df.wtkg2.replace([99999], float('NaN'), inplace=True)
df.wtkg2 /= 100.0
# clean weight a year ago
df.wtyrago.replace([7777, 9999], float('NaN'), inplace=True)
df['wtyrago'] = df.wtyrago.apply(lambda x: x/2.2 if x < 9000 else x-9000)
###Output
_____no_output_____
###Markdown
Now we'll read the data into a Pandas DataFrame.
###Code
brfss = ReadBrfss(nrows=None)
brfss.shape
###Output
_____no_output_____
###Markdown
And drop any rows that are missing height or weight (about 5%).
###Code
complete = brfss.dropna(subset=['htm3', 'wtkg2'])
complete.shape
###Output
_____no_output_____
###Markdown
Here's what the first few rows look like.
###Code
complete.head()
###Output
_____no_output_____
###Markdown
And we can summarize each of the columns.
###Code
complete.describe()
###Output
_____no_output_____
###Markdown
Since the data set is large, I'll start with a small random subset and we'll work our way up.
###Code
sample = thinkstats2.SampleRows(complete, 1000)
###Output
_____no_output_____
###Markdown
For convenience, I'll extract the columns we want as Pandas Series.
###Code
heights = sample.htm3
weights = sample.wtkg2
###Output
_____no_output_____
###Markdown
And then we can look at a scatterplot. By default, `Scatter` uses `alpha=0.2`, so when multiple data points are stacked, the intensity of the plot adds up (at least approximately).
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
legend=False)
###Output
_____no_output_____
###Markdown
The outliers stretch the bounds of the figure, making it harder to see the shape of the core. We can adjust the limits by hand.
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
The data points fall in columns because the heights were collected in inches and converted to cm. We can smooth this out by jittering the data.
###Code
heights = thinkstats2.Jitter(heights, 2.0)
weights = thinkstats2.Jitter(weights, 0.5)
###Output
_____no_output_____
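###Markdown
For readers without the thinkstats2 package: `Jitter` essentially adds a small amount of uniform noise to each value so the discretized measurements no longer stack in columns. The helper below is only a rough stand-in for that behavior (an assumption about what the library does, not its exact code).
###Code
import numpy as np
def jitter_roughly(values, amount=0.5):
    """Add uniform noise in [-amount, +amount) to break up discretized values."""
    values = np.asarray(values, dtype=float)
    return values + np.random.uniform(-amount, amount, size=len(values))
# Example: jittered = jitter_roughly(sample.htm3, amount=2.0)
###Output
_____no_output_____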
###Markdown
The following figure shows the effect of jittering.
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
With only 1000 samples, this works fine, but if we scale up to 10,000, we have a problem.
###Code
sample = thinkstats2.SampleRows(complete, 10000)
heights = sample.htm3
weights = sample.wtkg2
heights = thinkstats2.Jitter(heights, 2.0)
weights = thinkstats2.Jitter(weights, 0.5)
###Output
_____no_output_____
###Markdown
In the highest density parts of the figure, the ink is saturated, so they are not as dark as they should be, and the outliers are darker than they should be.
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
This problem -- saturated scatter plots -- is amazingly common. I see it all the time in published papers, even in good journals.With moderate data sizes, you can avoid saturation by decreasing the marker size and `alpha`.
###Code
thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
That's better. Although now the horizontal lines are more apparent, probably because people round their weight off to round numbers (in pounds). We could address that by adding more jittering, but I will leave it alone for now.If we increase the sample size again, to 100,000, we have to decrease the marker size and alpha level even more.
###Code
sample = thinkstats2.SampleRows(complete, 100000)
heights = sample.htm3
weights = sample.wtkg2
heights = thinkstats2.Jitter(heights, 3.5)
weights = thinkstats2.Jitter(weights, 1.5)
thinkplot.Scatter(heights, weights, alpha=0.1, s=1)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
Finally, we can generate a plot with the entire sample, about 395,000 respondents.
###Code
sample = complete
heights = sample.htm3
weights = sample.wtkg2
heights = thinkstats2.Jitter(heights, 3.5)
weights = thinkstats2.Jitter(weights, 1.5)
###Output
_____no_output_____
###Markdown
And I decreased the marker size one more time.
###Code
thinkplot.Scatter(heights, weights, alpha=0.07, s=0.5)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
This is about the best we can do, but it still has a few problems. The biggest problem with this version is that it takes a long time to generate, and the resulting figure is big.An alternative to a scatterplot is a hexbin plot, which divides the plane into hexagonal bins, counts the number of entries in each bin, and colors the hexagons in proportion to the number of entries.
###Code
thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
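###Markdown
If thinkplot is not available, matplotlib's built-in `hexbin` gives a comparable picture; this is just an equivalent sketch of the plot above, not a new analysis step.
###Code
import matplotlib.pyplot as plt
plt.hexbin(heights, weights, gridsize=50, cmap='Blues', mincnt=1)
plt.colorbar(label='respondents per bin')
plt.xlabel('height (cm)')
plt.ylabel('weight (kg)')
plt.axis([140, 210, 20, 200])
plt.show()
###Output
_____no_output_____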
###Markdown
The resulting figure is smaller and faster to generate, but it doesn't show all features of the scatterplot clearly.There are a few other options for visualizing relationships between variables. One is to group respondents by height and compute the CDF of weight for each group.I use `np.digitize` and `DataFrame.groupby` to group respondents by height:
###Code
bins = np.arange(135, 210, 10)
indices = np.digitize(complete.htm3, bins)
groups = complete.groupby(indices)
###Output
_____no_output_____
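###Markdown
The same grouping can also be written with `pd.cut`, which some readers may find easier to follow than `np.digitize`; this optional sketch produces essentially the same groups (the handling of values outside the bin range differs slightly).
###Code
import numpy as np
import pandas as pd
height_bins = pd.cut(complete.htm3, np.arange(135, 210, 10))  # label each row with its height interval
groups_alt = complete.groupby(height_bins)
print(groups_alt.size())                                      # number of respondents per height bin
###Output
_____no_output_____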
###Markdown
Then I compute a CDF for each group (except the first and last).
###Code
mean_heights = [group.htm3.mean() for i, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups][1:-1]
###Output
_____no_output_____
###Markdown
The following plot shows the distributions of weight.
###Code
thinkplot.PrePlot(7)
for mean, cdf in zip(mean_heights, cdfs):
thinkplot.Cdf(cdf, label='%.0f cm' % mean)
thinkplot.Config(xlabel='weight (kg)',
ylabel='CDF',
axis=[20, 200, 0, 1],
legend=True)
###Output
_____no_output_____
###Markdown
Using the CDFs, we can read off the percentiles of weight for each height group, and plot these weights against the mean height in each group.
###Code
thinkplot.PrePlot(5)
for percent in [90, 75, 50, 25, 10]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[135, 220, 35, 145],
legend=True)
###Output
_____no_output_____
###Markdown
This figure shows more clearly that the relationship between these variables is non-linear. Based on background information, I expect the distribution of weight to be lognormal, so I would try plotting weight on a log scale.
###Code
thinkplot.PrePlot(5)
for percent in [90, 75, 50, 25, 10]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
yscale='log',
axis=[135, 220, 35, 145],
legend=True)
###Output
_____no_output_____
###Markdown
That relationship looks more linear, although not perfectly.Correlation-----------After looking at a scatterplot, if you conclude that the relationship is at least approximately linear, you could compute a coefficient of correlation to quantify the strength of the relationship.
###Code
heights.corr(weights)
###Output
_____no_output_____
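###Markdown
To make explicit what the correlation coefficient measures, the short sketch below (an illustration, not part of the original analysis) computes Pearson's r directly from the covariance and standard deviations and compares it with the pandas result.
###Code
import numpy as np
x = np.asarray(heights, dtype=float)
y = np.asarray(weights, dtype=float)
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
r_manual = cov_xy / (x.std() * y.std())          # Pearson's r = Cov(x, y) / (std(x) * std(y))
print(r_manual, heights.corr(weights))           # the two values should agree closely
###Output
_____no_output_____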
###Markdown
A correlation of $\rho = 0.48$ is moderately strong -- I'll say more about what that means in a minute.Since the relationship is more linear under a log transform, we might transform weight first, before computing the correlation.
###Code
heights.corr(np.log(weights))
###Output
_____no_output_____
###Markdown
As expected, the correlation is a little higher with the transform.Spearman's rank correlation can measure the strength of a non-linear relationship, provided it is monotonic.
###Code
heights.corr(weights, method='spearman')
###Output
_____no_output_____
###Markdown
And Spearman's correlation is a little stronger still.Remember that correlation measures the strength of a linear relationship, but says nothing about the slope of the line that relates the variables.We can use `LeastSquares` to estimate the slope of the least squares fit.
###Code
inter, slope = thinkstats2.LeastSquares(heights, weights)
inter, slope
###Output
_____no_output_____
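###Markdown
For reference, simple least squares has a closed form: slope = Cov(x, y) / Var(x) and intercept = mean(y) - slope * mean(x). The sketch below (assuming `LeastSquares` implements ordinary least squares) recomputes both directly.
###Code
import numpy as np
x = np.asarray(heights, dtype=float)
y = np.asarray(weights, dtype=float)
slope_manual = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)
inter_manual = y.mean() - slope_manual * x.mean()
print(inter_manual, slope_manual)  # should match thinkstats2.LeastSquares(heights, weights)
###Output
_____no_output_____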
###Markdown
So each additional cm of height adds almost a kilo of weight!Here's what that line looks like, superimposed on the scatterplot:
###Code
fit_xs, fit_ys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Scatter(heights, weights, alpha=0.07, s=0.5)
thinkplot.Plot(fit_xs, fit_ys, color='gray')
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
The fit line is a little higher than the visual center of mass because it is being pulled up by the outliers.Here's the same thing using the log transform:
###Code
log_weights = np.log(weights)
inter, slope = thinkstats2.LeastSquares(heights, log_weights)
fit_xs, fit_ys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Scatter(heights, log_weights, alpha=0.07, s=0.5)
thinkplot.Plot(fit_xs, fit_ys, color='gray')
thinkplot.Config(xlabel='height (cm)',
ylabel='log weight (kg)',
axis=[140, 210, 3.5, 5.5],
legend=False)
###Output
_____no_output_____
###Markdown
That looks better, although maybe still not the line a person would have drawn.The residuals are the distances between each point and the fitted line.
###Code
inter, slope = thinkstats2.LeastSquares(heights, weights)
res = thinkstats2.Residuals(heights, weights, inter, slope)
###Output
_____no_output_____
###Markdown
The coefficient of determination $R^2$ is the fraction of the variance in weight we can eliminate by taking height into account.
###Code
var_y = weights.var()
var_res = res.var()
R2 = 1 - var_res / var_y
R2
###Output
_____no_output_____
###Markdown
The value $R^2 = 0.23$ indicates a moderately strong relationship.Note that the coefficient of determination is related to the coefficient of correlation, $\rho^2 = R^2$. So if we compute the sqrt of $R^2$, we should get $\rho$.
###Code
math.sqrt(R2)
###Output
_____no_output_____
###Markdown
And here's the correlation again:
###Code
thinkstats2.Corr(heights, weights)
###Output
_____no_output_____
###Markdown
If you see a high value of $\rho$, you should not be too impressed. If you square it, you get $R^2$, which you can interpret as the decrease in variance if you use the predictor (height) to guess the weight.But even the decrease in variance overstates the practical effect of the predictor. A better measure is the decrease in root mean squared error (RMSE).
###Code
RMSE_without = weights.std()
RMSE_without
###Output
_____no_output_____
###Markdown
If you guess weight without knowing height, you expect to be off by 19.6 kg on average.
###Code
RMSE_with = res.std()
RMSE_with
###Output
_____no_output_____
###Markdown
If height is known, you can decrease the error to 17.2 kg on average.
###Code
(1 - RMSE_with / RMSE_without) * 100
###Output
_____no_output_____
###Markdown
Scatterplot TestNow we will add a second numeric series, but still not make any other major adjustments. MatplotlibMatplotlib's highly abbreviated argument names make it hard to intuit the grammar. When using subplots, the API is not consistent with plain singular plots, and almost all layout work beyond the minimal requires subplots.
###Code
fig, ax = plt.subplots(figsize=(12, 6))
ax.scatter(x=dataset.acousticness, y=dataset.loudness, alpha=0.75, s=2)
ax.set_title('Acousticness x Loudness Scatterplot')
ax.set_xlabel('Acousticness')
ax.set_ylabel('Loudness')
plt.show()
###Output
_____no_output_____
###Markdown
Seaborn
###Code
fig, ax = plt.subplots(figsize=(12, 6))
with sns.axes_style("whitegrid"):
viz = sns.scatterplot(data=dataset, x="acousticness", y='loudness', alpha = .75, s = 6, ax=ax)
viz.set_title("Acousticness x Loudness Scatterplot")
viz.set_xlabel('Acousticness')
viz.set_ylabel('Loudness')
viz
###Output
_____no_output_____
###Markdown
Bokeh
###Code
output_notebook()
p = figure(title="Acousticness x Loudness Scatterplot",
y_axis_label='Loudness',
x_axis_label='Acousticness',
width=750,
height = 400)
p.scatter(x=dataset.acousticness, y=dataset.loudness, marker='circle',
line_color="#97b5e6", fill_color="#2b4570", fill_alpha=0.75, size=5)
show(p)
###Output
_____no_output_____
###Markdown
Altair
###Code
source = dataset.sample(axis = 0, n=4000)
viz = alt.Chart(source)
viz = viz.mark_circle(size = 6)
viz = viz.encode(alt.X("acousticness"),y='loudness')
viz = viz.properties(title='Acousticness x Loudness Scatterplot').properties(width=700, height=300)
viz
###Output
_____no_output_____
###Markdown
Plotnine
###Code
pno.dpi = (150)
pno.figure_size = (6,3)
ggplot(data=dataset, mapping=aes(x='acousticness', y='loudness')) + \
theme_bw(base_size=6) + \
geom_point(size = .5, fill = '#2b4570', alpha = .75, color = "#97b5e6") + \
labs(title = "Acousticness x Loudness Scatterplot", x="Acousticness", y="Loudness")
###Output
_____no_output_____
###Markdown
PlotlyIn Plotly Express, setting element visual traits requires passing vectors the same length as the data, column names, etc.; you can't just pass a constant.
###Code
fig = px.scatter(dataset,
x="acousticness",
y='loudness',
title="Acousticness x Loudness Scatterplot",
template='plotly_white')
fig.update_layout(
width=700,height=400,
margin=dict(l=15,r=25,b=15,t=40,pad=1))
fig.show()
###Output
_____no_output_____
###Markdown
Relationships between variables====================
###Code
# this future import makes this code mostly compatible with Python 2 and 3
from __future__ import print_function, division
import numpy as np
import pandas as pd
import math
import seaborn as sns
import thinkplot
import thinkstats2
np.random.seed(17)
sns.set()
%matplotlib inline
###Output
_____no_output_____
###Markdown
To explore the relationship between height and weight, I'll load data from the Behavioral Risk Factor Surveillance Survey (BRFSS).
###Code
def ReadBrfss(filename='CDBRFS08.ASC.gz', compression='gzip', nrows=None):
"""Reads the BRFSS data.
filename: string
compression: string
nrows: int number of rows to read, or None for all
returns: DataFrame
"""
var_info = [
('age', 101, 102, int),
('sex', 143, 143, int),
('wtyrago', 127, 130, int),
('finalwt', 799, 808, int),
('wtkg2', 1254, 1258, int),
('htm3', 1251, 1253, int),
]
columns = ['name', 'start', 'end', 'type']
variables = pd.DataFrame(var_info, columns=columns)
variables.end += 1
dct = thinkstats2.FixedWidthVariables(variables, index_base=1)
df = dct.ReadFixedWidth(filename, compression=compression, nrows=nrows)
CleanBrfssFrame(df)
return df
###Output
_____no_output_____
###Markdown
The following function cleans some of the variables we'll need.
###Code
def CleanBrfssFrame(df):
"""Recodes BRFSS variables.
df: DataFrame
"""
# clean age
df.age.replace([7, 9], float('NaN'), inplace=True)
# clean height
df.htm3.replace([999], float('NaN'), inplace=True)
# clean weight
df.wtkg2.replace([99999], float('NaN'), inplace=True)
df.wtkg2 /= 100.0
# clean weight a year ago
df.wtyrago.replace([7777, 9999], float('NaN'), inplace=True)
df['wtyrago'] = df.wtyrago.apply(lambda x: x/2.2 if x < 9000 else x-9000)
###Output
_____no_output_____
###Markdown
Now we'll read the data into a Pandas DataFrame.
###Code
brfss = ReadBrfss(nrows=None)
brfss.shape
###Output
_____no_output_____
###Markdown
And drop any rows that are missing height or weight (about 5%).
###Code
complete = brfss.dropna(subset=['htm3', 'wtkg2'])
complete.shape
###Output
_____no_output_____
###Markdown
Here's what the first few rows look like.
###Code
complete.head()
###Output
_____no_output_____
###Markdown
And we can summarize each of the columns.
###Code
complete.describe()
###Output
_____no_output_____
###Markdown
Since the data set is large, I'll start with a small random subset and we'll work our way up.
###Code
sample = thinkstats2.SampleRows(complete, 1000)
###Output
_____no_output_____
###Markdown
For convenience, I'll extract the columns we want as Pandas Series.
###Code
heights = sample.htm3
weights = sample.wtkg2
###Output
_____no_output_____
###Markdown
And then we can look at a scatterplot. By default, `Scatter` uses `alpha=0.2`, so when multiple data points are stacked, the intensity of the plot adds up (at least approximately).
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
legend=False)
###Output
_____no_output_____
###Markdown
The outliers stretch the bounds of the figure, making it harder to see the shape of the core. We can adjust the limits by hand.
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
The data points fall in columns because the heights were collected in inches and converted to cm. We can smooth this out by jittering the data.
###Code
heights = thinkstats2.Jitter(heights, 2.0)
weights = thinkstats2.Jitter(weights, 0.5)
###Output
_____no_output_____
###Markdown
The following figure shows the effect of jittering.
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
With only 1000 samples, this works fine, but if we scale up to 10,000, we have a problem.
###Code
sample = thinkstats2.SampleRows(complete, 10000)
heights = sample.htm3
weights = sample.wtkg2
heights = thinkstats2.Jitter(heights, 2.0)
weights = thinkstats2.Jitter(weights, 0.5)
###Output
_____no_output_____
###Markdown
In the highest density parts of the figure, the ink is saturated, so they are not as dark as they should be, and the outliers are darker than they should be.
###Code
thinkplot.Scatter(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
This problem -- saturated scatter plots -- is amazingly common. I see it all the time in published papers, even in good journals.With moderate data sizes, you can avoid saturation by decreasing the marker size and `alpha`.
###Code
thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
That's better. Although now the horizontal lines are more apparent, probably because people round their weight off to round numbers (in pounds). We could address that by adding more jittering, but I will leave it alone for now.If we increase the sample size again, to 100,000, we have to decrease the marker size and alpha level even more.
###Code
sample = thinkstats2.SampleRows(complete, 100000)
heights = sample.htm3
weights = sample.wtkg2
heights = thinkstats2.Jitter(heights, 3.5)
weights = thinkstats2.Jitter(weights, 1.5)
thinkplot.Scatter(heights, weights, alpha=0.1, s=1)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
Finally, we can generate a plot with the entire sample, about 395,000 respondents.
###Code
sample = complete
heights = sample.htm3
weights = sample.wtkg2
heights = thinkstats2.Jitter(heights, 3.5)
weights = thinkstats2.Jitter(weights, 1.5)
###Output
_____no_output_____
###Markdown
And I decreased the marker size one more time.
###Code
thinkplot.Scatter(heights, weights, alpha=0.07, s=0.5)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
This is about the best we can do, but it still has a few problems. The biggest problem with this version is that it takes a long time to generate, and the resulting figure is big.An alternative to a scatterplot is a hexbin plot, which divides the plane into hexagonal bins, counts the number of entries in each bin, and colors the hexagons in proportion to the number of entries.
###Code
thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
The resulting figure is smaller and faster to generate, but it doesn't show all features of the scatterplot clearly.There are a few other options for visualizing relationships between variables. One is to group respondents by height and compute the CDF of weight for each group.I use `np.digitize` and `DataFrame.groupby` to group respondents by height:
###Code
bins = np.arange(135, 210, 10)
indices = np.digitize(complete.htm3, bins)
groups = complete.groupby(indices)
###Output
_____no_output_____
###Markdown
Then I compute a CDF for each group (except the first and last).
###Code
mean_heights = [group.htm3.mean() for i, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups][1:-1]
###Output
_____no_output_____
###Markdown
The following plot shows the distributions of weight.
###Code
thinkplot.PrePlot(7)
for mean, cdf in zip(mean_heights, cdfs):
thinkplot.Cdf(cdf, label='%.0f cm' % mean)
thinkplot.Config(xlabel='weight (kg)',
ylabel='CDF',
axis=[20, 200, 0, 1],
legend=True)
###Output
_____no_output_____
###Markdown
Using the CDFs, we can read off the percentiles of weight for each height group, and plot these weights against the mean height in each group.
###Code
thinkplot.PrePlot(5)
for percent in [90, 75, 50, 25, 10]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[135, 220, 35, 145],
legend=True)
###Output
_____no_output_____
###Markdown
This figure shows more clearly that the relationship between these variables is non-linear. Based on background information, I expect the distribution of weight to be lognormal, so I would try plotting weight on a log scale.
###Code
thinkplot.PrePlot(5)
for percent in [90, 75, 50, 25, 10]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
yscale='log',
axis=[135, 220, 35, 145],
legend=True)
###Output
_____no_output_____
###Markdown
That relationship looks more linear, although not perfectly.Correlation-----------After looking at a scatterplot, if you conclude that the relationship is at least approximately linear, you could compute a coefficient of correlation to quantify the strength of the relationship.
###Code
heights.corr(weights)
###Output
_____no_output_____
###Markdown
A correlation of $\rho = 0.48$ is moderately strong -- I'll say more about what that means in a minute.Since the relationship is more linear under a log transform, we might transform weight first, before computing the correlation.
###Code
heights.corr(np.log(weights))
###Output
_____no_output_____
###Markdown
As expected, the correlation is a little higher with the transform.Spearman's rank correlation can measure the strength of a non-linear relationship, provided it is monotonic.
###Code
heights.corr(weights, method='spearman')
###Output
_____no_output_____
###Markdown
And Spearman's correlation is a little stronger still.Remember that correlation measures the strength of a linear relationship, but says nothing about the slope of the line that relates the variables.We can use `LeastSquares` to estimate the slope of the least squares fit.
###Code
inter, slope = thinkstats2.LeastSquares(heights, weights)
inter, slope
###Output
_____no_output_____
###Markdown
So each additional cm of height adds almost a kilo of weight!Here's what that line looks like, superimposed on the scatterplot:
###Code
fit_xs, fit_ys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Scatter(heights, weights, alpha=0.07, s=0.5)
thinkplot.Plot(fit_xs, fit_ys, color='gray')
thinkplot.Config(xlabel='height (cm)',
ylabel='weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
###Output
_____no_output_____
###Markdown
The fit line is a little higher than the visual center of mass because it is being pulled up by the outliers.Here's the same thing using the log transform:
###Code
log_weights = np.log(weights)
inter, slope = thinkstats2.LeastSquares(heights, log_weights)
fit_xs, fit_ys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Scatter(heights, log_weights, alpha=0.07, s=0.5)
thinkplot.Plot(fit_xs, fit_ys, color='gray')
thinkplot.Config(xlabel='height (cm)',
ylabel='log weight (kg)',
axis=[140, 210, 3.5, 5.5],
legend=False)
###Output
_____no_output_____
###Markdown
That looks better, although maybe still not the line a person would have drawn.The residuals are the distances between each point and the fitted line.
###Code
inter, slope = thinkstats2.LeastSquares(heights, weights)
res = thinkstats2.Residuals(heights, weights, inter, slope)
###Output
_____no_output_____
###Markdown
The coefficient of determination $R^2$ is the fraction of the variance in weight we can eliminate by taking height into account.
###Code
var_y = weights.var()
var_res = res.var()
R2 = 1 - var_res / var_y
R2
###Output
_____no_output_____
###Markdown
The value $R^2 = 0.23$ indicates a moderately strong relationship.Note that the coefficient of determination is related to the coefficient of correlation, $\rho^2 = R^2$. So if we compute the sqrt of $R^2$, we should get $\rho$.
###Code
math.sqrt(R2)
###Output
_____no_output_____
###Markdown
And here's the correlation again:
###Code
thinkstats2.Corr(heights, weights)
###Output
_____no_output_____
###Markdown
If you see a high value of $\rho$, you should not be too impressed. If you square it, you get $R^2$, which you can interpret as the decrease in variance if you use the predictor (height) to guess the weight.But even the decrease in variance overstates the practical effect of the predictor. A better measure is the decrease in root mean squared error (RMSE).
###Code
RMSE_without = weights.std()
RMSE_without
###Output
_____no_output_____
###Markdown
If you guess weight without knowing height, you expect to be off by 19.6 kg on average.
###Code
RMSE_with = res.std()
RMSE_with
###Output
_____no_output_____
###Markdown
If height is known, you can decrease the error to 17.2 kg on average.
###Code
(1 - RMSE_with / RMSE_without) * 100
###Output
_____no_output_____ |
EuropeanSoccerDataProject/England.ipynb | ###Markdown
Let's Choose a League to explore
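The next cell queries `cnx`, which is presumably a SQLite connection to the European Soccer Database opened beforehand; a minimal sketch of that setup (the file name `database.sqlite` is an assumption about where the downloaded database lives) would be:
###Code
import sqlite3
# Assumed setup: open a connection to the European Soccer Database file
cnx = sqlite3.connect('database.sqlite')
###Output
_____no_output_____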
###Code
Match = pd.read_sql_query('Select * FROM Match', cnx)
England_Match = Match[Match['country_id']==1729]
###Output
_____no_output_____
###Markdown
There are too many columns, so let's pick some and narrow them down
###Code
England_Match.info()
England_Match.columns
England_Betting = England_Match.loc[:,['B365H', 'B365D', 'B365A', 'BWH',
'BWD', 'BWA', 'IWH', 'IWD', 'IWA', 'LBH', 'LBD', 'LBA','PSH',
'PSD', 'PSA', 'WHH', 'WHD', 'WHA', 'SJH', 'SJD','SJA', 'VCH',
'VCD', 'VCA', 'GBH', 'GBD', 'GBA', 'BSH', 'BSD', 'BSA']]
England_Betting
#lowercase e to distinguish less columns going forward
england_Match = England_Match.loc[:,['id', 'season', 'stage', 'date',
'match_api_id', 'home_team_api_id', 'away_team_api_id',
'home_team_goal', 'away_team_goal', 'goal', 'shoton',
'shotoff', 'foulcommit', 'card', 'cross', 'corner',
'possession']]
england_Match
###Output
_____no_output_____ |
tasks/Scrapy/scrapy_official_newspapers/keywords_and_dictionaries/Old_files/Negative_Keywords_Knowledge_Domain.ipynb | ###Markdown
Negative Knowledge Domain KeywordsThis notebook is used to manually build a keyword dictionary. This particular dictionary contains what we call negative keywords. Negative keywords are keywords used to remove policies from the scraping process. For instance, some documents are scraped because they contain a word related to "environment", but it then turns out that the document is a nomination for an institutional post or something related to telecommunications. With the negative keywords "Designa director" or "Telefonía" we can remove these documents from the final list of scraped documents. Dependencies
###Code
import json
###Output
_____no_output_____
###Markdown
DictionaryThis is a "user friendly" way to manually enter data in a dictionary which can later be transformed into a JSON file.
###Code
keywords = {
'Aceptan renuncia' : 0,
'Acepta renuncia' : 0,
'Acuicultura': 0,
'Aprueban expedición' : 0,
'Archivo general de la nación' : 0,
'Arqueológica' : 0,
'Arqueológicas' : 0,
'Arqueológico' : 0,
'Arqueológicos' : 0,
'Artefactos navales' : 0,
'Asociación religiosa' : 0,
'Atmosférica' : 0,
'Atmosférico' : 0,
'Autorizan viaje' : 0,
'Aviación' : 0,
'Calidad del aire' : 0,
'Certificados de estudios' : 0,
'Condiciones empresas instaladoras' : 0,
'Congregación' : 0,
'Consejo de la judicatura' : 0,
'Consejo de seguridad' : 0,
'Contaminación sonora' : 0,
'Contraloría general' : 0,
'Contrato Ley de la Industria' : 0,
'Convivencia ciudadana' : 0,
'Datos personales' : 0,
'Declara desierto concurso' : 0,
'Declara desierto el concurso' : 0,
'Declaran desierto concurso' : 0,
'Declaran desierto el concurso' : 0,
'Declara desierto proceso' : 0,
'Declaran vacancia' : 0,
'Declara vacante' : 0,
'Delimitación' : 0,
'Desechos' : 0,
'Designa director' : 0,
'Designa directora' : 0,
'Designa ministro' : 0,
'Designa vicepresidente' : 0,
'Designan asesor' : 0,
'Designan asesora' : 0,
'Designan asesores' : 0,
'Designan coordinador' : 0,
'Designan coordinadora' : 0,
'Designan coordinadores': 0,
'Designan director' : 0,
'Designan directora' : 0,
'Designan directivos' : 0,
'Designan ejecutor' : 0,
'Designan ejecutora' : 0,
'Designan funcionario' : 0,
'Designan funcionaria' : 0,
'Designan funcionarios' : 0,
'Designan funcionarias' : 0,
'Designan gerente' : 0,
'Designan gerentes' : 0,
'Designan jefe' : 0,
'Designan jefa' : 0,
'Designan miembro' : 0,
'Designan miembros' : 0,
'Designan presidente' : 0,
'Designan presidenta' : 0,
'Designan representante' : 0,
'Designan representantes' : 0,
'Designan responsable' : 0,
'Designan responsables' : 0,
'Designan secretario' : 0,
'Designan secretaria' : 0,
'Designan subdirector' : 0,
'Designan subdirectora' : 0,
'Designan sub director' : 0,
'Designan sub directora' : 0,
'Educación de los adultos' : 0,
'Educación pública' : 0,
'Educación superior' : 0,
'Energía eléctrica' : 0,
'Espectáculos públicos' : 0,
'Establecimientos educacionales' : 0,
'Estatuto orgánico' : 0,
'Familia' : 0,
'Familiar' : 0,
'Farmacovigilancia' : 0,
'Funcionario' : 0,
'Funcionarios' : 0,
'Indústria de la construcción' : 0,
'Inmueble' : 0,
'Juegos florales' : 0,
'Matrimonio civil' : 0,
'Migraciones' : 0,
'Nacionalización del inmueble' : 0,
'Nombra' : 0,
'Nombramiento' : 0,
'Organización política' : 0,
'Otorgan duplicado' : 0,
'Otorgan duplicados' : 0,
'Pasan a la situación de retiro' : 0,
'Penitenciario' : 0,
'Personas desplazadas internas' : 0,
'Persona natural' : 0,
'Personas naturales' : 0,
'Pesca de investigación' : 0,
'Pesquería' : 0,
'Planta de personal' : 0,
'Plantas del personal' : 0,
'Planta envasadora' : 0,
'Plantas envasadoras' : 0,
'Planta nacional' : 0,
'Portuaria' : 0,
'Portuario' : 0,
'Publicitario' : 0,
'Pobreza' : 0,
'Radiaciones' : 0,
'Radiodifusión' : 0,
'Redes eléctricas' : 0,
'Resíduos' : 0,
'Reglamento orgánico' : 0,
'Salario mínimo' : 0,
'Salmónidos' : 0,
'Salud ambiental' : 0,
'Tasas municipales' : 0,
'Tasas por servicios municipales' : 0,
'Tasas, que el municipio' : 0,
'Telecomunicaciones' : 0,
'Telefonía' : 0,
'Televisión' : 0,
'Universidad' : 0,
'Vacante cargo' : 0,
'Vivienda' : 0
}
len(keywords)
###Output
_____no_output_____
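###Markdown
To illustrate how this dictionary is meant to be used downstream (an illustrative sketch only; the `is_negative` helper and the sample titles are hypothetical, not part of the scraping pipeline), a scraped document can be dropped whenever its title contains one of the negative keywords.
###Code
def is_negative(title, negative_keywords=keywords):
    """Return True if the title contains any negative keyword (case-insensitive)."""
    title_lower = title.lower()
    return any(kw.lower() in title_lower for kw in negative_keywords)
titles = ["Designan director de la oficina regional", "Aprueban plan de manejo forestal"]
print([t for t in titles if not is_negative(t)])  # only the forestry title survives the filter
###Output
_____no_output_____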
###Markdown
Saving the JSON file in Google Colaboratory
###Code
from google.colab import drive
drive.mount('/content/drive/')
with open('/content/drive/My Drive/Official Folder of WRI Latin America Project/Omdena Challenge/task4_web_scraping/Google_Search_Scraping/negative_keywords_knowledge_domain.json', 'w') as dict:
json.dump(keywords, dict)
from google.colab import files
files.download('/content/drive/My Drive/Official Folder of WRI Latin America Project/Omdena Challenge/task4_web_scraping/Google_Search_Scraping/negative_keywords_knowledge_domain.json')
###Output
_____no_output_____
###Markdown
Saving the JSON file in the local folder of the Scrapy project
###Code
# path = "../output/"
filename = "negative_keywords_knowledge_domain.json"
# file = path + filename
with open(filename, 'w') as fp:
json.dump(keywords, fp)
###Output
_____no_output_____ |
P1_ML2021_73148_UsingLemma.ipynb | ###Markdown
Import libraries and data into the notebook:
###Code
#basic imports:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
#sklearn imports:
from sklearn.pipeline import Pipeline
# Feature extraction - CountVectorizer or TFidfVectorizer - "Term frequency, inverse document frequency":
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
from sklearn.decomposition import TruncatedSVD
# Model selection:
from sklearn.model_selection import train_test_split, StratifiedKFold, learning_curve
# Feature selection:
from sklearn.feature_selection import SelectFromModel
from sklearn.feature_selection import SelectKBest, chi2
# Models:
from sklearn.linear_model import Perceptron, PassiveAggressiveClassifier, SGDClassifier, LogisticRegression
from sklearn.linear_model import LogisticRegressionCV, RidgeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
# Performance metrics:
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, auc, roc_auc_score, roc_curve, RocCurveDisplay
# There are several approaches to cleaning the text and processing it as a "Bag-of-words"/tokenizing/vectorizing.
# Approach using the NLTK library and corpus:
import nltk
from nltk.tokenize import word_tokenize
from nltk.tokenize import WordPunctTokenizer
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
# Import Counter
from collections import Counter
# Regular expression and string imports:
import re
import string
from string import punctuation
# Set some styles to match other code repos for data visualization:
plt.style.use('ggplot')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 14
plt.rcParams['axes.labelsize'] = 12
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 12
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
plt.rcParams['legend.fontsize'] = 12
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['axes.grid']=False
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['xkcd:pale orange', 'xkcd:sea blue', 'xkcd:pale red', 'xkcd:sage green', 'xkcd:terra cotta', 'xkcd:dull purple', 'xkcd:teal', 'xkcd:goldenrod', 'xkcd:cadet blue',
'xkcd:scarlet']
bbox_props = dict(boxstyle="round,pad=0.3", fc=colors[0], alpha=.5)
# Load the data into raw, unprocessed dataframes:
df_fake_raw = pd.read_csv("C:/Users/JOAO/Desktop/CleanSlate/input/Fake.csv")
df_true_raw = pd.read_csv("C:/Users/JOAO/Desktop/CleanSlate/input/True.csv")
df_fake_raw["class"] = 0
df_true_raw["class"] = 1
dataset_size = [len(df_fake_raw),len(df_true_raw)]
# Concatenate both raw data into a single dataframe:
df_news_raw = pd.concat([df_fake_raw, df_true_raw],axis=0)
# Because date and subject are not linearly independent they will reduce model accuracy and induce redundant terms in our models. To avoid this remove these collumns:
df_news_lean = df_news_raw.drop(["subject","date"], axis=1)
# Concatenate the title with the remaining text:
df_news_lean["text"] = df_news_lean["title"] + df_news_lean["text"]
# Drop the title column:
df_news_text = df_news_lean.drop(["title"],axis=1)
# Random shuffling the dataframe:
data = df_news_text.sample(frac = 1)
# Reset the indexes of the dataframes, otherwise they would be doubled in the final data:
data.reset_index(inplace=True)
data.drop(["index"], axis=1, inplace=True)
# Save this "curated" data to a .csv file:
data.to_csv('C:/Users/JOAO/Desktop/CleanSlate/input/Curated_data.csv')
# Some data visualization:
plt.pie(dataset_size,explode=[0.1,0.1],colors=['darkorange','darkgreen'],startangle=90,shadow=True,labels=['Fake News','True News'],autopct='%1.1f%%')
###Output
_____no_output_____
###Markdown
Data cleaning:
###Code
def nltk_process(data):
# Tokenization
# tokenList = word_tokenize(data)
tk = WordPunctTokenizer()
tokenList = tk.tokenize(data)
# Convert the tokens into lowercase: lower_tokens
lower_tokens = [t.lower() for t in tokenList]
# Retain alphabetic words: alpha_only
alpha_only = [t for t in lower_tokens if t.isalpha()]
# Lemmatization
wordnet_lemmatizer = WordNetLemmatizer()
lemmaList = []
for word in alpha_only:
lemmaList.append(wordnet_lemmatizer.lemmatize(word, pos="v"))
# Stopwords
filtered_words = []
nltk_stop_words = set(stopwords.words("english"))
for word in lemmaList:
if word not in nltk_stop_words:
filtered_words.append(word)
# Remove punct.
for word in filtered_words:
if word in string.punctuation:
filtered_words.remove(word)
return filtered_words
%%time
if __name__ == "__main__":
data["clean"] = [" ".join(text) for text in data["text"].apply(lambda x: nltk_process(x)).values]
# Do some manual checks of the clean text to verify if the lemmatization and puntuation removal was done right:
# print(data["clean"][44235])
# print()
# print(data["clean"][15000])
# print()
# print(data["clean"][10])
type(data["clean"])
print(data["clean"][4])
print()
print(data["text"][4])
###Output
watch obama perfectly mock trump insane followers think literal demonconservative radio show host alex jones recently proclaim president obama hillary clinton really demons send lucifer doubt mean metaphorically mean literally proof evil origins jones claim smell like sulphur hell apparently somebody mention president speak event campaign trail clinton obama decide give sniff test tuesday night check suspicious demonic odors demonize mean literally way read day guy radio apparently trump show frequently say hillary demons say smell like sulphur somethin president obama perform sniff test smell hand crowd laugh absurdity jones bullsh president begin burst laughter along crowd mean come people right wing nut job reduce us president sniff make sure actually f cking demon happen obama respond alex jones say hillary literal demons smell like sulfur sniff pic twitter com gsxrsklrdf colin jones colinjones october image via video screen capture
WATCH: Obama PERFECTLY Mocks Trump’s Insane Followers That Think He’s A Literal DemonConservative radio show host Alex Jones recently proclaimed that President Obama and Hillary Clinton are really demons, sent by Lucifer himself no doubt. No, he did not mean metaphorically. He meant literally. As proof of their evil origins, Jones claimed that they both smell like sulphur and hell. Apparently, somebody mentioned to this to the president. So, while speaking at an event on the campaign trail for Clinton, Obama decided to give himself a sniff test on Tuesday night, just to check for any suspicious demonic odors. we demonize each other. And I mean that literally, by the way. I was reading the other day, there s a guy on the radio who apparently, Trump s on his show frequently, he said me and Hillary are demons. Said we smell like sulphur. Ain t that somethin ? President Obama then performed his sniff test, smelling his hand as the crowd laughed at the absurdity of Jones bullsh*t. Now, the president began before he burst into laughter along with the crowd. I mean, come on, people! This is what the right wing nut jobs have reduced us to. Our president has to sniff himself to make sure he isn t actually a f*cking demon.This happened. Obama responds to Alex Jones saying he and Hillary are literal demons who smell like sulfur. Then he sniffs himself pic.twitter.com/GSxRsklRDf Colin Jones (@colinjones) October 11, 2016Featured image via video screen capture
###Markdown
Vectorization of the text data into numerical dataIncluding the train, test split after the vectorization.
###Code
%%time
X = data["clean"]
y = data["class"]
# Create training and test sets
X_train, X_test, y_train, y_test = train_test_split(X,y,train_size=10000, test_size=0.1, shuffle=False)
# Initialize a TfidfVectorizer object: tfidf_vectorizer
tfidf_vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words='english', max_df=0.7)
# Fit and Transform the training data: tfidf_train
tfidf_train = tfidf_vectorizer.fit_transform(X_train)
# Transform the test data: tfidf_test
tfidf_test = tfidf_vectorizer.transform(X_test)
###Output
Wall time: 12.7 s
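###Markdown
As an optional check of the vectorization step (`get_feature_names_out` assumes a reasonably recent scikit-learn; older versions use `get_feature_names`), we can look at the matrix shapes and a few vocabulary terms.
###Code
print(tfidf_train.shape, tfidf_test.shape)       # (n_documents, n_features)
feature_names = tfidf_vectorizer.get_feature_names_out()
print(len(feature_names), feature_names[:10])    # vocabulary size and some example terms
###Output
_____no_output_____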
###Markdown
MODEL TRAINING AND TESTING No cross validation, no optimization, no hyperparameter tuning!
###Code
%%time
clf = SGDClassifier(penalty='elasticnet', alpha=0.000001, max_iter=1000)
clf.fit(tfidf_train, y_train)
y_pred_SDG = clf.predict(tfidf_test)
cm = confusion_matrix(y_test, y_pred_SDG)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, y_pred_SDG)*100,2)))
print()
print(classification_report(y_test, y_pred_SDG))
%%time
classifier = LogisticRegression()
classifier.fit(tfidf_train, y_train)
y_pred_LR = classifier.predict(tfidf_test)
cm = confusion_matrix(y_test, y_pred_LR)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, y_pred_LR)*100,2)))
print()
print(classification_report(y_test, y_pred_LR))
%%time
clf_svc = LinearSVC(dual=True, max_iter=200)
clf_svc.fit(tfidf_train, y_train)
y_pred_SVC = clf_svc.predict(tfidf_test)
cm = confusion_matrix(y_test, y_pred_SVC)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, y_pred_SVC)*100,2)))
print()
print(classification_report(y_test, y_pred_SVC))
%%time
# Create a Multinomial Naive Bayes classifier: nb_classifier
nb_classifier = MultinomialNB(alpha=0.01)
# Fit the classifier to the training data
nb_classifier.fit(tfidf_train, y_train)
# Create the predicted tags: pred
pred_NB = nb_classifier.predict(tfidf_test)
cm = confusion_matrix(y_test, pred_NB)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, pred_NB)*100,2)))
print()
print(classification_report(y_test, pred_NB))
%%time
# Create a RandomForestclassifier: RFC_classifier
RFC_classifier = RandomForestClassifier(max_depth=2, random_state=0)
# Fit the classifier to the training data
RFC_classifier.fit(tfidf_train, y_train)
# Create the predicted tags: pred
pred_rfc = RFC_classifier.predict(tfidf_test)
cm = confusion_matrix(y_test, pred_rfc)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, pred_rfc)*100,2)))
print()
print(classification_report(y_test, pred_rfc))
%%time
rdg_clf = RidgeClassifier(tol=1e-2, solver="sparse_cg")
rdg_clf.fit(tfidf_train, y_train)
pred_rdg = rdg_clf.predict(tfidf_test)
cm = confusion_matrix(y_test, pred_rdg)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, pred_rdg)*100,2)))
print()
print(classification_report(y_test, pred_rdg))
%%time
pcp_clf = Perceptron(max_iter=50)
pcp_clf.fit(tfidf_train, y_train)
pred_pcp = pcp_clf.predict(tfidf_test)
cm = confusion_matrix(y_test, pred_pcp)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, pred_pcp)*100,2)))
print()
print(classification_report(y_test, pred_pcp))
%%time
passagress_clf = PassiveAggressiveClassifier(max_iter=50)
passagress_clf.fit(tfidf_train, y_train)
passag_pred = passagress_clf.predict(tfidf_test)
cm = confusion_matrix(y_test, passag_pred)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, passag_pred)*100,2)))
print()
print(classification_report(y_test, passag_pred))
%%time
kneigh_clf = KNeighborsClassifier(n_neighbors=10)
kneigh_clf.fit(tfidf_train, y_train)
kneigh_pred = kneigh_clf.predict(tfidf_test)
cm = confusion_matrix(y_test, kneigh_pred)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, kneigh_pred)*100,2)))
print()
print(classification_report(y_test, kneigh_pred))
###Output
_____no_output_____
###Markdown
Hyperparameter tuningThe task is to use a validation set to determine the best C and $\sigma$ parameters.For both C and $\sigma$, it is suggested to try the following values (0.01; 0.03; 0.1; 0.3; 1; 3; 10; 30). A helper function such as *dataset3Params* would try all possible pairs of values for C and $\sigma$: for the 8 values listed above, a total of 8^2 = 64 different models would be trained and evaluated on the validation set.To generate the sets for hyperparameter tuning, stratified K-Fold cross-validation from sklearn is used.The performance metrics used to choose the best parameters are the learning and ROC/AUC curves.
###Code
%%time
k10 = StratifiedKFold(n_splits=5, shuffle=False, random_state=None)
classifier = LogisticRegressionCV(Cs=10, fit_intercept=True, cv=k10, dual=False, penalty='l2', tol=0.0001, max_iter=100, class_weight=None, n_jobs=-1)
classifier.fit(tfidf_train, y_train)
y_pred_LR = classifier.predict(tfidf_test)
cm = confusion_matrix(y_test, y_pred_LR)
new_cm = pd.DataFrame(cm , index = ['Fake','Not Fake'] , columns = ['Fake','Not Fake'])
sns.heatmap(new_cm,cmap= 'Blues', annot = True, fmt='',xticklabels = ['Fake','Not Fake'], yticklabels = ['Fake','Not Fake'])
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title('Confusion matrix On Test Data')
plt.show()
print("Accuracy: {}".format(round(accuracy_score(y_test, y_pred_LR)*100,2)))
print()
print(classification_report(y_test, y_pred_LR))
###Output
_____no_output_____
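###Markdown
The exhaustive search over all (C, $\sigma$) pairs described above is not implemented in this cell; a hedged sketch using scikit-learn's `GridSearchCV` with an RBF-kernel SVC (where `gamma` takes the place of $\sigma$, roughly gamma = 1/(2*sigma^2)) could look like the following. The fit is commented out because it is expensive on the full TF-IDF matrix.
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30]
param_grid = {'C': values, 'gamma': values}              # 8 x 8 = 64 candidate models
grid = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=k10, scoring='accuracy', n_jobs=-1)
# grid.fit(tfidf_train, y_train)                         # expensive; consider a subsample first
# print(grid.best_params_, grid.best_score_)
###Output
_____no_output_____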
###Markdown
Feature Engineering: (Optimization)One way of optimizing the data and models is to reduce the size of our dataset without hampering model performance.A method of achieving this is truncated singular value decomposition.
###Code
tSVD = TruncatedSVD(n_components=100, algorithm='arpack', random_state=None)
Xmin = tSVD.fit_transform(tfidf_train)
###Output
_____no_output_____
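###Markdown
One optional way to judge how much information the 100 components retain (an added check using the fitted estimator's `explained_variance_ratio_` attribute) is:
###Code
print(Xmin.shape)                                # (n_documents, 100)
print(tSVD.explained_variance_ratio_.sum())      # fraction of the variance kept by the 100 components
###Output
_____no_output_____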
###Markdown
Compute Loss function Recall that the Logistic Regression model is defined as: $h_{\theta}(x^{(i)})= \frac{1}{1+e^{-\theta^{T} x^{(i)}}}$The cost function in Logistic Regression is: $J(\theta) = \frac{1}{m} \sum_{i=1}^{m} [ -y^{(i)}log(h_{\theta}(x^{(i)})) - (1 - y^{(i)})log(1 - h_{\theta}(x^{(i)}))]$The gradient of $J(\theta)$ is a vector of the same length as $\theta$ where the jth element (for j = 0, 1, ..., n) is defined as:$ \frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})x_j^{(i)}$Complete function *costFunction* to return $J(\theta)$ and the gradient (the partial derivative of $J(\theta)$ with respect to each $\theta_j$) for logistic regression.
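A minimal implementation of *costFunction* following the formulas above (added here as a sketch; the vectorized form assumes `X` is the design matrix with one row per example) could be:
###Code
import numpy as np
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
def costFunction(theta, X, y):
    """Return the logistic-regression cost J(theta) and its gradient."""
    m = len(y)
    h = sigmoid(X @ theta)                       # h_theta(x) for every example
    eps = 1e-12                                  # guard against log(0)
    J = -(1.0 / m) * np.sum(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
    grad = (1.0 / m) * (X.T @ (h - y))           # partial derivative w.r.t. each theta_j
    return J, grad
###Output
_____no_output_____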
###Code
def plot_learning_curve(
estimator,
title,
X,
y,
axes=None,
ylim=None,
cv=None,
n_jobs=None,
train_sizes=np.linspace(0.1, 1.0, 15),
):
"""
Generate 3 plots: the test and training learning curve, the training
samples vs fit times curve, the fit times vs score curve.
Parameters
----------
estimator : estimator instance
An estimator instance implementing `fit` and `predict` methods which
will be cloned for each validation.
title : str
Title for the chart.
X : array-like of shape (n_samples, n_features)
Training vector, where ``n_samples`` is the number of samples and
``n_features`` is the number of features.
y : array-like of shape (n_samples) or (n_samples, n_features)
Target relative to ``X`` for classification or regression;
None for unsupervised learning.
axes : array-like of shape (3,), default=None
Axes to use for plotting the curves.
ylim : tuple of shape (2,), default=None
Defines minimum and maximum y-values plotted, e.g. (ymin, ymax).
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : int or None, default=None
Number of jobs to run in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
train_sizes : array-like of shape (n_ticks,)
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the ``dtype`` is float, it is regarded
as a fraction of the maximum size of the training set (that is
determined by the selected validation method), i.e. it has to be within
(0, 1]. Otherwise it is interpreted as absolute sizes of the training
sets. Note that for classification the number of samples usually have
to be big enough to contain at least one sample from each class.
        (default: np.linspace(0.1, 1.0, 15))
"""
if axes is None:
_, axes = plt.subplots(1, 3, figsize=(20, 5))
axes[0].set_title(title)
if ylim is not None:
axes[0].set_ylim(*ylim)
axes[0].set_xlabel("Training examples")
axes[0].set_ylabel("Score")
train_sizes, train_scores, test_scores, fit_times, _ = learning_curve(
estimator,
X,
y,
cv=cv,
n_jobs=n_jobs,
train_sizes=train_sizes,
return_times=True,
)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
fit_times_mean = np.mean(fit_times, axis=1)
fit_times_std = np.std(fit_times, axis=1)
# Plot learning curve
axes[0].grid()
axes[0].fill_between(
train_sizes,
train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std,
alpha=0.1,
color="r",
)
axes[0].fill_between(
train_sizes,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.1,
color="g",
)
axes[0].plot(
train_sizes, train_scores_mean, "o-", color="r", label="Training score"
)
axes[0].plot(
train_sizes, test_scores_mean, "o-", color="g", label="Cross-validation score"
)
axes[0].legend(loc="best")
# Plot n_samples vs fit_times
axes[1].grid()
axes[1].plot(train_sizes, fit_times_mean, "o-")
axes[1].fill_between(
train_sizes,
fit_times_mean - fit_times_std,
fit_times_mean + fit_times_std,
alpha=0.1,
)
axes[1].set_xlabel("Training examples")
axes[1].set_ylabel("fit_times")
axes[1].set_title("Scalability of the model")
# Plot fit_time vs score
axes[2].grid()
axes[2].plot(fit_times_mean, test_scores_mean, "o-")
axes[2].fill_between(
fit_times_mean,
test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std,
alpha=0.1,
)
axes[2].set_xlabel("fit_times")
axes[2].set_ylabel("Score")
axes[2].set_title("Performance of the model")
return plt
%%time
if __name__ == "__main__":
fig, axes = plt.subplots(3, 1, figsize=(10, 15))
title = "Learning Curves (Logistic Regression)"
# 15-Folds Cross validation learning curve:
estimator = LogisticRegression()
curves_plot = plot_learning_curve(estimator, title, tfidf_train, y_train,axes=axes, ylim=(0.7, 1.01), cv=15, n_jobs=-1)
plt.show()
###Output
_____no_output_____
###Markdown
A "brute force" method for hyper-parameter optimization (similar to grid search); note that the GridSearchCV version lives in another file named Pipeline1. Below is a ROC/AUC curve example using the sparse matrix representation:
###Code
# Plot the ROC/AUC curve for the LinearSVC example:
from sklearn.metrics import RocCurveDisplay
classifier = LinearSVC(dual=True, max_iter=200)
y_pred_LSVC = classifier.fit(tfidf_train, y_train).decision_function(tfidf_test)
RocCurveDisplay.from_predictions(y_test, y_pred_LSVC)
plt.show()
###Output
_____no_output_____
###Markdown
The following cross-validated ROC/AUC example fails because, as written, it cannot handle the sparse matrix representation (a possible workaround is sketched at the end of the cell).
###Code
%%time
if __name__ == "__main__":
    # Run classifier with cross-validation and plot ROC curves
    from sklearn.metrics import auc  # needed below for auc(mean_fpr, mean_tpr), in case it was not imported earlier
    # NOTE: X and y are assumed to hold the full feature matrix and labels used for cross-validation
    cv = StratifiedKFold(n_splits=10, shuffle=False, random_state=None)
    classifier = LinearSVC()
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
classifier.fit(X[train], y[train])
viz = RocCurveDisplay.from_estimator(
classifier,
X[test],
y[test],
name="ROC fold {}".format(i),
alpha=0.3,
lw=1,
ax=ax,
)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
interp_tpr[0] = 0.0
tprs.append(interp_tpr)
aucs.append(viz.roc_auc)
ax.plot([0, 1], [0, 1], linestyle="--", lw=2, color="r", label="Chance", alpha=0.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
ax.plot(
mean_fpr,
mean_tpr,
color="b",
label=r"Mean ROC (AUC = %0.2f $\pm$ %0.2f)" % (mean_auc, std_auc),
lw=2,
alpha=0.8,
)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
ax.fill_between(
mean_fpr,
tprs_lower,
tprs_upper,
color="grey",
alpha=0.2,
label=r"$\pm$ 1 std. dev.",
)
ax.set(
xlim=[-0.05, 1.05],
ylim=[-0.05, 1.05],
title="Receiver operating characteristic example",
)
ax.legend(loc="lower right")
plt.show()
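# --- Hedged workaround for the sparse-matrix issue noted above ---
# Assumption: X / y would be the TF-IDF matrix and labels; CSR format supports
# X[train]-style row indexing, and densifying is a last resort for small corpora.
import scipy.sparse as sp
def as_row_sliceable(matrix):
    """Return the matrix in a form that supports row indexing like matrix[train]."""
    return matrix.tocsr() if sp.issparse(matrix) else np.asarray(matrix)
# X = as_row_sliceable(X)   # e.g. before running the cross-validation loop above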
# "Brute force" methods for achieving hyper-parameter optimization (similar to GridSearch):
# Note: GridSearchCV algorithm is in another file!
###Output
_____no_output_____ |
notebooks/04-mb-methods.ipynb | ###Markdown
The Game, without interactions with the map or the Google Sheet; initial conditions with (for now) placeholder numbers:
###Code
import numpy as np
import math
# create dummy matrix
n_plots = 4
rows = cols = 80
plot_length = int(rows/n_plots)
dummy_plot = np.ones(plot_length**2).reshape((plot_length, plot_length))
A1 = dummy_plot
A2 = dummy_plot * 2
B1 = dummy_plot * 3
B2 = dummy_plot * 4
large_matrix = np.block([[A1, A2], [B1, B2]])
large_dummy_matrix = np.ones_like(large_matrix)
n = plot_length
coef_matrix = np.array([[1, 2], [3, 4]])
result = np.multiply(large_dummy_matrix, np.kron(coef_matrix, np.ones((n,n))))
matrix_indizes = np.indices((n_plots, n_plots), dtype="uint8") + 1
row_indizes, column_indizes = matrix_indizes[0], matrix_indizes[1]
plot_definition_matrix = np.char.add(row_indizes.astype(str), column_indizes.astype(str)).astype(np.uint8)
dummy_playing_field_matrix = np.ones(shape=(rows, cols), dtype=np.uint8)
large_plot_definition_matrix = np.multiply(
dummy_playing_field_matrix,
np.kron(plot_definition_matrix, np.ones(shape=(plot_length, plot_length)))
)
# what I used from your matrix-dummies-notebook:
lulc_matrix = dummy_playing_field_matrix # 80 x 80
cooperation_matrix = dummy_playing_field_matrix # 80 x 80
plot_definition_matrix # 4 x 4
tourism_matrix = plot_definition_matrix # 4 x 4
n_blocks = 4
n_pixel = 20
# other assumptions:
rounds = 10 # number of rounds
teams = 3 # in round 3 - first 3 rounds to grasp what's happening
brexit = 6 # in round 6
increased_timber_prices = 1.1 # +10%
dummy_playing_field_matrix
###Output
_____no_output_____
###Markdown
With these dictionaries, the change from the dummy matrix to the real data should be easy.
###Code
dummy_playing_field_matrix.shape
# include new dictionary - the ownership dictionary was never used because i assigned each player a block. - this is also necessary for teamwork (at least with this code)
#ownership = {
# 'Forester1': [11, 12, 21, 22], # Forester 1 owns Plot 11, 12, 21, 22
# 'Farmer1': [13, 14,23, 24], # Farmer 1 owns Plot 13, 14, 23, 24
# 'Farmer2': [31, 32, 41, 42], # Farmer 2 owns Plot 31, 32, 41, 42
# 'Forester2': [33, 34, 43, 44] # Forester 2 owns Plot 33, 34, 43, 44
#}
landuse = {
'cattle': 1,
'sheep': 2,
'n_forest': 3,
'c_forest': 4
}
# I modified this because it didn't have cattle yet
# describes the simplified LULC types
simplified_lulc_mapping = {
"Sheep Farming": 1,
"Native Forest": 2,
"Commercial Forest": 3,
"Cattle Farming": 4
}
#number_of_players = len(ownership) + 1 # +1 because of the tourism
# for more than 5 players there would be a new way to allocate ownership of plots.
###Output
_____no_output_____
###Markdown
A function for the decision if teamwork takes place. It assumes 4 players allocated as above and that all have to agree to it.
###Code
# assumes four players only / if the four corner player say yes then it's true.
def teamwork(cooperation_matrix):
teamwork = False
row, col = cooperation_matrix.shape
if cooperation_matrix[0][0] == cooperation_matrix[0][col-1] == cooperation_matrix[row-1][0] == cooperation_matrix[row-1][col-1] == True:
teamwork = True
return teamwork
team_work = teamwork(cooperation_matrix)
team_work
###Output
_____no_output_____
###Markdown
A function returning the number of landuse-pixel for a given matrix (according to the dictionary landuse above).
###Code
# get the total yield for the current map
def yield_map(field):
tot_cattle = np.count_nonzero(field == simplified_lulc_mapping['Cattle Farming'])
tot_sheep = np.count_nonzero(field == simplified_lulc_mapping['Sheep Farming'])
tot_n_forest = np.count_nonzero(field == simplified_lulc_mapping['Native Forest'])
tot_c_forest = np.count_nonzero(field == simplified_lulc_mapping['Commercial Forest'])
return tot_cattle, tot_sheep, tot_n_forest, tot_c_forest
# get the yield for the current playing field
tot_cattle, tot_sheep, tot_n_forest, tot_c_forest = yield_map(lulc_matrix)
tot_cattle, tot_sheep, tot_n_forest, tot_c_forest
###Output
_____no_output_____
###Markdown
Felix: this does not work
###Code
# get the yield for a plot
plot_cattle, plot_sheep, plot_n_forest, plot_c_forest = yield_map(plot_definition_matrix)
plot_cattle, plot_sheep, plot_n_forest, plot_c_forest
###Output
_____no_output_____
###Markdown
Felix: blocks are of size n_pixels / n_blocks, this looks like quadrants?
###Code
# cropping a block out of the map
indices_block_1 = list(range(0, int(rows/math.sqrt(n_blocks))))
indices_block_2 = list(range(int(rows/math.sqrt(n_blocks)), rows))
lulc_matrix[np.ix_(indices_block_1,indices_block_2)]
# global variables for the profit_pp function:
income_farmland_sheep = 30
income_farmland_cattle = 100
income_forest_commercial = 200
income_forest_native = 50
gdp_pc_scotland = 29.600 # NOTE: this is probably meant to be 29600 (European-style thousands separator)
unempl_rate_scotland = 0.05
row = col = 80
estimate_farmland = 1/2 # half of it is farmland - maybe a bit lower than reality to increase pressure for the game?
estimate_forest = 1/3
number_of_farmer = row*col*income_farmland_sheep/gdp_pc_scotland*estimate_farmland/(1-unempl_rate_scotland)
number_of_forester = row*col*income_forest_commercial/gdp_pc_scotland*estimate_forest/(1-unempl_rate_scotland)
unempl_rate_scotland = 0.05
# uses the number_of_farmers or the numbers_of_forester above and the total amount of money from both farmer or forester as an argument
def unemployment(money, gdp_pc_scotland, number_of_workers):
    unempl_rate_scotland = int(money / gdp_pc_scotland) / number_of_workers
    return unempl_rate_scotland
###Output
_____no_output_____
###Markdown
The idea of the function profit_pp is to adapt the prices and save the adapted prices in a list. These equations are based on the starting values, with the assumption that they dictate the demand. The variable brexit says whether Brexit has already happened this round, which would lead to increased_timber_prices; I thought that might slightly reduce the number of different variables, because I felt it was getting a bit over-complicated. The tot_\\ arguments are lists of the total area of each product obtained from the function yield_map. The \\_pp arguments are lists with the former prices per pixel. The multiplying factors are a bit arbitrary, but based on the assumption that the price difference of the production is an indicator of productivity, i.e. 3 = income_farmland_cattle/income_farmland_sheep. Because I didn't want to risk dividing by zero, I added a 1. A short usage sketch follows the function definition.
###Code
# adapt the prices
def profit_pp(round, brexit, increased_timber_prices, tot_sheep, tot_cattle, tot_n_forest, tot_c_forest, cattle_pp, sheep_pp, n_forest_pp, c_forest_pp):
#doesn't take tourism effects into account yet. and the equations are pretty random.
cattle_pp_new = tot_sheep[0] / (1 + tot_cattle[round]*3 + tot_sheep[round])*cattle_pp[0] # a certain demand - + 1 so that its never going to infinity should all land become forest
sheep_pp_new = sheep_pp[0] # assume sheep can go everywhere, eat everything and no degradation and its profit only influences cattle by competition
    c_forest_pp_new = (tot_c_forest[0] + tot_n_forest[0])/(1 + tot_c_forest[round] * 4 + tot_n_forest[round]) * c_forest_pp[0]
n_forest_pp_new = n_forest_pp[0] # assumes native forest can grow everywhere and its profit only influences the commercial forest through competition in the timber market
if brexit > round:
        n_forest_pp_new *= increased_timber_prices # less import of wood.
        c_forest_pp_new *= increased_timber_prices
return cattle_pp_new, sheep_pp_new, n_forest_pp_new, c_forest_pp_new
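# Hedged usage sketch for profit_pp: every *_pp and tot_* argument is a list that grows
# by one entry per round. The starting prices below reuse the income_* globals purely
# for illustration; they are not calibrated values.
cattle_pp, sheep_pp = [income_farmland_cattle], [income_farmland_sheep]
n_forest_pp, c_forest_pp = [income_forest_native], [income_forest_commercial]
tot_cattle_list, tot_sheep_list = [tot_cattle], [tot_sheep]
tot_n_forest_list, tot_c_forest_list = [tot_n_forest], [tot_c_forest]
# new_prices = profit_pp(0, brexit, increased_timber_prices,
#                        tot_sheep_list, tot_cattle_list, tot_n_forest_list, tot_c_forest_list,
#                        cattle_pp, sheep_pp, n_forest_pp, c_forest_pp)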
from pdb import set_trace
###Output
_____no_output_____
###Markdown
Here I try to calculate the money a farmer earns each **round**. The **tourism** argument is a factor t calculated with the tourism_matrix, i.e. cattle, sheep, n_forest, c_forest = yield_map(tourism_matrix) and t = (sheep*2 + cattle*1.2 + n_forest*2)/n_pixel/n_pixel/(1+c_forest*10). The numbers in this equation are based on discussions about what is how bad. The one is again added so that there is no division by zero, and the division by the number of pixels normalises t a bit; it is then a factor that can at most double the income. **brexit** is again an integer giving the round at which Brexit happens. I discussed with the design team that teams could start around round 3 and Brexit a few rounds after, i.e. 5. **teams** is an integer giving the round after which teams are allowed. **teamwork** is the result of the function teamwork. The area_\\ arguments are lists of the areas owned by the individual farmer; they can be calculated with yield_map by addressing an individual block, i.e. yield_map(lulc_matrix[np.ix_([*list of pixels 0 to 39*],[*list of pixels 0 to 39*])]). The \\_pp arguments are lists of the prices calculated with profit_pp. The other arguments are the costs of land-use change. There are no approximations in Marco's Excel file, but we agreed that it should cost something; I thought I'd ask Marco about it next time. I made up this list: cf_to_nf = 0.5, nf_to_cf = 0.5, s_to_c = 0.5, s_to_nf = c_to_nf = 1 (must stay the same for the Brexit calculations for the farmer), c_to_s = 0.5, nf_to_s = -0.1, nf_to_c = 0.8, assuming farmers can convert native forest to farmland but not to commercial forest (sell wood). These factors define how profitable the land is in the first year of the new use, i.e. 0.5 means it only yields half the income. For the money_pp_forester function it's the same. subsidies is something that could try to capture the tax break for native forests and the subsidies for sheep farming during Brexit. That was before the taxation list; I think it could again simplify things. I just set it to something like 0.8. The tax breaks might also not concern "normal" taxes but rather inheritance tax. A sketch of the tourism factor is included at the top of the next code cell.
###Code
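# --- Hedged sketch of the tourism factor t described in the markdown above ---
# The weights (2, 1.2, 2, 10) are the ones quoted in the text; n_pixel is the plot edge
# length defined earlier. This is illustrative, not a calibrated game rule.
def tourism_factor(tourism_field, n_pixel):
    cattle, sheep, n_forest, c_forest = yield_map(tourism_field)
    return (sheep * 2 + cattle * 1.2 + n_forest * 2) / n_pixel / n_pixel / (1 + c_forest * 10)
# t = tourism_factor(tourism_matrix, n_pixel)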
def money_pp_farmer(round, tourism, teams, brexit, teamwork, area_sheep, area_cattle, area_c_forest, area_n_forest, sheep_pp, cattle_pp, n_forest_pp, c_forest_pp, nf_to_s, nf_to_c, s_to_c, s_to_nf, c_to_s, c_to_nf, subsidies, starting_capital):
if round == 0:
money = starting_capital
else:
# costs of landscape change
try:
d_sheep = area_sheep[round] - area_sheep[round-1]
except: set_trace()
d_cattle = area_cattle[round] - area_cattle[round-1]
d_n_forest = area_n_forest[round] - area_n_forest[round-1] # necessary to potentially allow two changes (i.e. a rise or native forests and cattle on cost of sheep )
        m_change = 0
        m_brexit = 0
        m_teamwork = 0  # default when no teamwork bonus applies
if d_n_forest < 0:
m_change += min([d_cattle, d_n_forest], key= abs)*nf_to_c*cattle_pp[round]
m_change += min([d_sheep, d_n_forest], key= abs)*nf_to_s*sheep_pp[round]
if d_sheep < 0:
m_change += min([d_cattle, d_sheep], key= abs)*s_to_c*cattle_pp[round]
m_change += min([d_sheep, d_n_forest], key= abs)*s_to_nf*n_forest_pp[round]
if d_cattle < 0:
m_change += min([d_cattle, d_sheep], key= abs)*c_to_s*sheep_pp[round]
m_change += min([d_cattle, d_n_forest], key= abs)*c_to_nf *n_forest_pp[round]
# money from the area
m_area = (area_sheep[round] * sheep_pp[round]) + (area_cattle[round] * cattle_pp[round])
if teamwork == True and teams > round:
m_teamwork = area_c_forest[round]*c_forest_pp[round] + area_n_forest[round]*n_forest_pp[round]
if brexit > round:
m_brexit = (subsidies-1)*(area_sheep[round] * sheep_pp[round])
if d_n_forest > 0:
m_brexit += d_n_forest * (subsidies-1)
m_tourism = tourism * m_area
# maybe return later on the performance of each landuse/industrie --> append() so that its easy to plot?
        money = m_area + m_change + m_tourism + m_teamwork + m_brexit
return money
def money_pp_forester(round, tourism, teams, brexit, teamwork, area_sheep, area_c_forest, area_n_forest, sheep_pp, n_forest_pp, c_forest_pp, nf_to_cf, cf_to_nf, subsidies, starting_capital):
# '''
# area_c_forest: number of pixel displaying commercial forest
# area_n_forest: number of pixel displaying native forest
# round: round of the game (starting at 0)
# '''
if round == 0:
money = starting_capital
else:
        d_n_forest = area_n_forest[round] - area_n_forest[round-1] # necessary to potentially allow two changes (i.e. a rise of native forest at the cost of commercial forest)
        d_c_forest = area_c_forest[round] - area_c_forest[round-1]
        m_change = 0
        m_brexit = 0
        m_teamwork = 0  # default when no teamwork bonus applies
# ich habe momentan gemacht, dass man nur etwas verkleinern darf!
if d_n_forest < 0:
m_change += d_c_forest * nf_to_cf * c_forest_pp[round]
if d_n_forest > 0:
m_change += d_n_forest * cf_to_nf *n_forest_pp[round]
# money from the area
m_area = (area_n_forest[round] * n_forest_pp[round]) + ((area_c_forest[round] * c_forest_pp[round]))
if teamwork == True and teams > round:
m_teamwork = area_sheep[round]*sheep_pp[round]
if brexit > round:
if d_n_forest > 0:
m_brexit = d_n_forest * (subsidies-1) + (subsidies-1)*(area_sheep[round] * sheep_pp[round])
m_tourism = tourism * m_area
# maybe return later on the performance of each landuse/industrie --> append() so that its easy to plot?
        money = m_area + m_change + m_tourism + m_teamwork + m_brexit
return money
a,b,c,d = main(rounds, teams, brexit, lulc_matrix, tourism_matrix, n_blocks, rows)
###Output
_____no_output_____
###Markdown
The Game without interactions with the map or the google sheet initial conditions with (until now) random numbers:
###Code
import numpy as np
import math
# create dummy matrix
n_plots = 4
rows = cols = 80
plot_length = int(rows/n_plots)
dummy_plot = np.ones(plot_length**2).reshape((plot_length, plot_length))
A1 = dummy_plot
A2 = dummy_plot * 2
B1 = dummy_plot * 3
B2 = dummy_plot * 4
large_matrix = np.block([[A1, A2], [B1, B2]])
large_dummy_matrix = np.ones_like(large_matrix)
n = plot_length
coef_matrix = np.array([[1, 2], [3, 4]])
result = np.multiply(large_dummy_matrix, np.kron(coef_matrix, np.ones((n,n))))
matrix_indizes = np.indices((n_plots, n_plots), dtype="uint8") + 1
row_indizes, column_indizes = matrix_indizes[0], matrix_indizes[1]
plot_definition_matrix = np.char.add(row_indizes.astype(str), column_indizes.astype(str)).astype(np.uint8)
dummy_playing_field_matrix = np.ones(shape=(rows, cols), dtype=np.uint8)
large_plot_definition_matrix = np.multiply(
dummy_playing_field_matrix,
np.kron(plot_definition_matrix, np.ones(shape=(plot_length, plot_length)))
)
# what I used from your matrix-dummies-notebook:
lulc_matrix = dummy_playing_field_matrix # 80 x 80
cooperation_matrix = dummy_playing_field_matrix # 80 x 80
plot_definition_matrix # 4 x 4
tourism_matrix = plot_definition_matrix # 4 x 4
n_blocks = 4
n_pixel = 20
# other assumptions:
rounds = 10 # number of rounds
teams = 3 # in round 3 - first 3 rounds to grasp whats happening
brexit = 6 # in round 6 -
increased_timber_prices = 1.1 # +10%
###Output
_____no_output_____
###Markdown
I thought with these dictionaries the change from dummy matrix to the real data should be easy.
###Code
dummy_playing_field_matrix.shape
# include new dictionary - the ownership dictionary was never used because i assigned each player a block. - this is also necessary for teamwork (at least with this code)
#ownership = {
# 'Forester1': [11, 12, 21, 22], # Forester 1 owns Plot 11, 12, 21, 22
# 'Farmer1': [13, 14,23, 24], # Farmer 1 owns Plot 13, 14, 23, 24
# 'Farmer2': [31, 32, 41, 42], # Farmer 2 owns Plot 31, 32, 41, 42
# 'Forester2': [33, 34, 43, 44] # Forester 2 owns Plot 33, 34, 43, 44
#}
landuse = {
'cattle': 1,
'sheep': 2,
'n_forest': 3,
'c_forest': 4
}
# I modified this because it didn't have cattle yet
# describes the simplified LULC types
simplified_lulc_mapping = {
"Sheep Farming": 1,
"Native Forest": 2,
"Commercial Forest": 3,
"Cattle Farming": 4
}
#number_of_players = len(ownership) + 1 # +1 because of the tourism
# for more than 5 players there would be a new way to allocate ownership of plots.
###Output
_____no_output_____
###Markdown
A function for the decision if teamwork takes place. It assumes 4 players allocated as above and that all have to agree to it.
###Code
# assumes four players only / if the four corner player say yes then it's true.
def teamwork(cooperation_matrix):
teamwork = False
row, col = cooperation_matrix.shape
if cooperation_matrix[0][0] == cooperation_matrix[0][col-1] == cooperation_matrix[row-1][0] == cooperation_matrix[row-1][col-1] == True:
teamwork = True
return teamwork
team_work = teamwork(cooperation_matrix)
team_work
###Output
_____no_output_____
###Markdown
A function returning the number of landuse-pixel for a given matrix (according to the dictionary landuse above).
###Code
# get the total yield for the current map
def yield_map(field):
tot_cattle = np.count_nonzero(field == simplified_lulc_mapping['Cattle Farming'])
tot_sheep = np.count_nonzero(field == simplified_lulc_mapping['Sheep Farming'])
    tot_n_forest = np.count_nonzero(field == simplified_lulc_mapping['Native Forest'])
    tot_c_forest = np.count_nonzero(field == simplified_lulc_mapping['Commercial Forest'])
return tot_cattle, tot_sheep, tot_n_forest, tot_c_forest
# get the yield for the current playing field
tot_cattle, tot_sheep, tot_n_forest, tot_c_forest = yield_map(lulc_matrix)
# get the yield for a plot
plot_cattle, plot_sheep, plot_n_forest, plot_c_forest = yield_map(plot_definition_matrix)
# cropping a block out of the map
indices_block_1 = list(range(0, int(rows/math.sqrt(n_blocks))))
indices_block_2 = list(range(int(rows/math.sqrt(n_blocks)), rows))
lulc_matrix[np.ix_(indices_block_1,indices_block_2)].shape
# global variables for the profit_pp function:
income_farmland_sheep = 30
income_farmland_cattle = 100
income_forest_commercial = 200
income_forest_native = 50
gdp_pc_scotland = 29.600
unempl_rate_scotland = 0.05
row = col = 80
estimate_farmland = 1/2 # half of it is farmland - maybe a bit lower than reality to increase pressure for the game?
estimate_forest = 1/3
number_of_farmer = row*col*income_farmland_sheep/gdp_pc_scotland*estimate_farmland/(1-unempl_rate_scotland)
number_of_forester = row*col*income_forest_commercial/gdp_pc_scotland*estimate_forest/(1-unempl_rate_scotland)
unempl_rate_scotland = 0.05
# uses the number_of_farmers or the numbers_of_forester above and the total amount of money from both farmer or forester as an argument
def unemployment(money, gdp_pc_scotland, number_of_workers):
    unempl_rate_scotland = int(money / gdp_pc_scotland) / number_of_workers
    return unempl_rate_scotland
###Output
_____no_output_____
###Markdown
The idea for the function profit_pp is to try to adapt the prices and save the adapted prices in a list. these equations are based on the starting value with the assumption that they dictate the demand.The variable brexit says if brexit already happened this round. This would lead to increased_timber_prices. I thought that might slightly reduce all those different variables, becuase I felt it is getting a bit over-complicated. The tot_\\ is a list of the total area of the product gained by the function yield_map. \\_pp is a list with the former prices per pixel. The multiplying factors are a bit random but also based on the assumpton that the price difference of the production is an indicator on the productivity. i.e. 3 = income_farmland_cattle/income_farmland_sheep. Because I didn't want to risk dividing by zero I added a 1.
###Code
# adapt the prices
def profit_pp(round, brexit, increased_timber_prices, tot_sheep, tot_cattle, tot_n_forest, tot_c_forest, cattle_pp, sheep_pp, n_forest_pp, c_forest_pp):
#doesn't take tourism effects into account yet. and the equations are pretty random.
cattle_pp_new = tot_sheep[0] / (1 + tot_cattle[round]*3 + tot_sheep[round])*cattle_pp[0] # a certain demand - + 1 so that its never going to infinity should all land become forest
sheep_pp_new = sheep_pp[0] # assume sheep can go everywhere, eat everything and no degradation and its profit only influences cattle by competition
    c_forest_pp_new = (tot_c_forest[0] + tot_n_forest[0])/(1 + tot_c_forest[round] * 4 + tot_n_forest[round]) * c_forest_pp[0]
n_forest_pp_new = n_forest_pp[0] # assumes native forest can grow everywhere and its profit only influences the commercial forest through competition in the timber market
if brexit > round:
        n_forest_pp_new *= increased_timber_prices # less import of wood.
        c_forest_pp_new *= increased_timber_prices
return cattle_pp_new, sheep_pp_new, n_forest_pp_new, c_forest_pp_new
from pdb import set_trace
###Output
_____no_output_____
###Markdown
Here I try to calculate the money a farmer earns each **round**. The **tourism** is a factor t calculated with the tourism_matrixi.e. cattle, sheep, n_forest, c_forest = yield_map(tourism_matrix) t = (sheep*2 + cattle*1.2 + n_forest*2)/n_pixel/n_pixel/(1+c_forest*10) The numbers in this equation are just based on discussions on what is how bad. The one is again added so that there is no division through zero. The division by the number of pixel is to norm t a bit. It is then a factor that can be max twice as high. **brexit** is again an integer ot the round at which brexit happens. I discussed with the design team that teams could be about 3 and brexit then a few rounds after - i.e. 5. **teams** is an integer of the round after which teams are allowed. **teamwork** is the result of the function teamwork. the area_\\ is a list of the area of the individual farmer. It can be calculated by yield map by adressing an indicidual block. ie.yield_map(lulc_matrix[np.ix_([*list of pixels 0 to 39*],[*list of pixel 0 to 39*])])\\_pp is a list of the prices calculated with profit_pp the other arguments are the costs of landuse change. There are no approximations on marcos excel file but we agreed that it should cost something. I thought I'd ask marco next time about it. I made up this list it say: cf_to_nf = 0.5 nf_to_cf = 0.5 s_to_c = 0.5 s_to_nf = c_to_nf = 1 must stay the same (brexit calculations for farmer) c_to_s = 0.5 nf_to_s = -0.1 assuming farmers can convert native forest to farmland but not commercial forest (sell wood) nf_to_c = 0.8 These factors define on how profitable the land is on the first year with the new use. i.e. 0.5 means that it only has half the use. For the mpney_pp_forester function it's the same. subsidies is something that I thought could try to include the tax break for the native forests and the subsidies for the sheep farming. during brexit. That was before the taxation list. I think it could again simplify things. I ust set it to I think it could be something like 0.8.The tax-breaks might also not concern "normal taxes" but rather in regards to inheri
###Code
def money_pp_farmer(round, tourism, teams, brexit, teamwork, area_sheep, area_cattle, area_c_forest, area_n_forest, sheep_pp, cattle_pp, n_forest_pp, c_forest_pp, nf_to_s, nf_to_c, s_to_c, s_to_nf, c_to_s, c_to_nf, subsidies, starting_capital):
if round == 0:
money = starting_capital
else:
# costs of landscape change
try:
d_sheep = area_sheep[round] - area_sheep[round-1]
except: set_trace()
d_cattle = area_cattle[round] - area_cattle[round-1]
d_n_forest = area_n_forest[round] - area_n_forest[round-1] # necessary to potentially allow two changes (i.e. a rise or native forests and cattle on cost of sheep )
        m_change = 0
        m_brexit = 0
        m_teamwork = 0  # default when no teamwork bonus applies
if d_n_forest < 0:
m_change += min([d_cattle, d_n_forest], key= abs)*nf_to_c*cattle_pp[round]
m_change += min([d_sheep, d_n_forest], key= abs)*nf_to_s*sheep_pp[round]
if d_sheep < 0:
m_change += min([d_cattle, d_sheep], key= abs)*s_to_c*cattle_pp[round]
m_change += min([d_sheep, d_n_forest], key= abs)*s_to_nf*n_forest_pp[round]
if d_cattle < 0:
m_change += min([d_cattle, d_sheep], key= abs)*c_to_s*sheep_pp[round]
m_change += min([d_cattle, d_n_forest], key= abs)*c_to_nf *n_forest_pp[round]
# money from the area
m_area = (area_sheep[round] * sheep_pp[round]) + (area_cattle[round] * cattle_pp[round])
if teamwork == True and teams > round:
m_teamwork = area_c_forest[round]*c_forest_pp[round] + area_n_forest[round]*n_forest_pp[round]
if brexit > round:
m_brexit = (subsidies-1)*(area_sheep[round] * sheep_pp[round])
if d_n_forest > 0:
m_brexit += d_n_forest * (subsidies-1)
m_tourism = tourism * m_area
# maybe return later on the performance of each landuse/industrie --> append() so that its easy to plot?
        money = m_area + m_change + m_tourism + m_teamwork + m_brexit
return money
def money_pp_forester(round, tourism, teams, brexit, teamwork, area_sheep, area_c_forest, area_n_forest, sheep_pp, n_forest_pp, c_forest_pp, nf_to_cf, cf_to_nf, subsidies, starting_capital):
# '''
# area_c_forest: number of pixel displaying commercial forest
# area_n_forest: number of pixel displaying native forest
# round: round of the game (starting at 0)
# '''
if round == 0:
money = starting_capital
else:
        d_n_forest = area_n_forest[round] - area_n_forest[round-1] # necessary to potentially allow two changes (i.e. a rise of native forest at the cost of commercial forest)
        d_c_forest = area_c_forest[round] - area_c_forest[round-1]
        m_change = 0
        m_brexit = 0
        m_teamwork = 0  # default when no teamwork bonus applies
# ich habe momentan gemacht, dass man nur etwas verkleinern darf!
if d_n_forest < 0:
m_change += d_c_forest * nf_to_cf * c_forest_pp[round]
if d_n_forest > 0:
m_change += d_n_forest * cf_to_nf *n_forest_pp[round]
# money from the area
m_area = (area_n_forest[round] * n_forest_pp[round]) + ((area_c_forest[round] * c_forest_pp[round]))
if teamwork == True and teams > round:
m_teamwork = area_sheep[round]*sheep_pp[round]
if brexit > round:
if d_n_forest > 0:
m_brexit = d_n_forest * (subsidies-1) + (subsidies-1)*(area_sheep[round] * sheep_pp[round])
m_tourism = tourism * m_area
# maybe return later on the performance of each landuse/industrie --> append() so that its easy to plot?
        money = m_area + m_change + m_tourism + m_teamwork + m_brexit
return money
a,b,c,d = main(rounds, teams, brexit, lulc_matrix, tourism_matrix, n_blocks, rows)
###Output
[]
4
|
_notebooks/2020-11-06-Bob-Ross-Episode-Generator.ipynb | ###Markdown
Bob Ross Episode Text Generator> The following shows how to create a text generator using LSTMs in Keras.- toc: true - badges: true- comments: true- categories: [nlp, keras] This project shows how we can gather data and build a model to generate text in the style of Bob Ross. In order to gather data, we'll be using a script called [download-yt-playlist.py](bob_ross/scripts/download-yt-playlist.py) that uses the YouTube API to download a Bob Ross playlist. This playlist contains most of the Bob Ross episodes as well as the transcript from each episode.
###Code
!pip install beautifulsoup4
import pandas as pd
import tensorflow as tf
from bs4 import BeautifulSoup
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we'll import the dataset that we created using the `download-yt-playlist` script. The CSV is included in the repo, and we'll load it into a pandas dataframe. Our CSV contains 249 rows, which is the number of episodes returned by the script. We've dropped any rows with missing values, since not all of the episodes had a transcript.
###Code
df = pd.read_csv('bob_ross/bob_ross_episodes.csv', index_col=0, parse_dates=['snippet.publishedAt'], usecols=['snippet.description', 'snippet.publishedAt', 'snippet.title', 'transcript'])
df.dropna(inplace=True)
# df['snippet.publishedAt'] =pd.to_datetime(df['snippet.publishedAt'])
df.sort_values(by='snippet.publishedAt', inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
The following builds out the text generator. We'll do the following: load a sample of the dataset (about 30%), combine all the transcriptions into one long string, use BeautifulSoup to remove any HTML tags from the text, and then generate a list of all the characters in the transcription.
###Code
# only use about 30% of the rows
test_df = df.sample(frac=.3)
len(test_df)
#combine transcription into 1 list
descriptions = ''
all_transcriptions = ''
for index, row in test_df.iterrows():
all_transcriptions += BeautifulSoup(row['transcript'],"lxml").get_text().replace('\n', ' ')
len(all_transcriptions)
###Output
_____no_output_____
###Markdown
Next, we'll just display a piece of the all_transcriptions just to see what it looks like
###Code
all_transcriptions[:100]
chars = sorted(list(set(all_transcriptions)))
print('Count of unique characters (i.e., features):', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
###Output
Count of unique characters (i.e., features): 81
###Markdown
Next, we'll generate separate lists of all the strings that we'll feed into the model. Each entry is 40 characters of the full text, offset by 3 characters (`step`).
###Code
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(all_transcriptions) - maxlen, step):
sentences.append(all_transcriptions[i: i + maxlen])
next_chars.append(all_transcriptions[i + maxlen])
print('nb sequences:', len(sentences))
print(sentences[:10], "\n")
print(next_chars[:10])
###Output
nb sequences: 507374
["- Hi, welcome back. I'm certainly glad y", "i, welcome back. I'm certainly glad you ", "welcome back. I'm certainly glad you cou", "come back. I'm certainly glad you could ", "e back. I'm certainly glad you could joi", "ack. I'm certainly glad you could join u", ". I'm certainly glad you could join us t", "'m certainly glad you could join us toda", 'certainly glad you could join us today. ', 'tainly glad you could join us today. And']
['o', 'c', 'l', 'j', 'n', 's', 'o', 'y', 'A', ',']
###Markdown
We now have 507374 sequences that each contain 40 characters of the string. The first is `- Hi, welcome back. I'm certainly glad y`, followed by `i, welcome back. I'm certainly glad you `. Next, we'll create tensors x and y that one-hot encode all the sentences we've created.
###Code
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
###Output
_____no_output_____
###Markdown
Building The ModelNext, we'll build out our model.
###Code
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.callbacks import LambdaCallback, ModelCheckpoint
import random
import sys
import io
###Output
_____no_output_____
###Markdown
The following are 2 functions that will print the prediction from each epoch, as well as apply the `temperature`. Temperature is defined as follows: "Temperature is a scaling factor applied to the outputs of our dense layer before applying the softmax activation function. In a nutshell, it defines how conservative or creative the model's guesses are for the next character in a sequence. Lower values of temperature (e.g., 0.2) will generate "safe" guesses, whereas values of temperature above 1.0 will start to generate riskier guesses. Think of it as the amount of surprise you'd have at seeing an English word start with "st" versus "sg". When temperature is low, we may get lots of the's and and's; when temperature is high, things get more unpredictable." -- https://medium.freecodecamp.org/applied-introduction-to-lstms-for-text-generation-380158b29fb3 A tiny numeric demo of the temperature scaling is included after the sample() function below.
###Code
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
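# Hedged mini-demo of the temperature idea described above: the same predicted
# distribution becomes flatter (riskier) as the temperature grows. Toy numbers only.
toy_preds = np.array([0.6, 0.3, 0.1])
for toy_temp in (0.2, 1.0, 1.2):
    scaled = np.exp(np.log(toy_preds) / toy_temp)
    print(toy_temp, np.round(scaled / scaled.sum(), 3))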
def on_epoch_end(epoch, logs):
# Function invoked for specified epochs. Prints generated text.
# Using epoch+1 to be consistent with the training epochs printed by Keras
if epoch+1 == 1 or epoch+1 == 15:
print()
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(all_transcriptions) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print('----- diversity:', diversity)
generated = ''
sentence = all_transcriptions[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
else:
print()
print('----- Not generating text after Epoch: %d' % epoch)
generate_text = LambdaCallback(on_epoch_end=on_epoch_end)
def build_basic_model():
model = Sequential()
model.add(LSTM(batch_size, input_shape=(maxlen,len(chars))))
model.add(Dense(len(chars)))
model.add(Activation("softmax"))
return model
###Output
_____no_output_____
###Markdown
Here, we'll create our model. After a few tests, I've seen that having 2 LSTMs with a batch size of 256 returns very good results. The first model is a basic model with 1 LSTM.
###Code
batch_size=128
learning_rate = 0.01
model = build_basic_model()
optimizer = RMSprop(lr=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
# define the checkpoint
filepath = "weights.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='loss',
verbose=1,
save_best_only=True,
mode='min')
# fit model using our gpu
with tf.device('/gpu:0'):
model.fit(x, y,
batch_size=batch_size,
epochs=15,
verbose=1,
callbacks=[generate_text, checkpoint])
# You can see that the results were good, but let's go deeper.
###Output
_____no_output_____
###Markdown
Building a better model Here, we'll be using 2 LSTMs and dropout; during training, we'll save the best model for later.
###Code
from keras.layers import Dropout
batch_size=256
learning_rate = 0.01
def build_deeper_model():
model = Sequential()
model.add(LSTM(batch_size, input_shape=(maxlen, len(chars)), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(batch_size))
model.add(Dropout(0.2))
    model.add(Dense(len(chars), activation='softmax'))
    return model
model = build_deeper_model()
model.compile(loss='categorical_crossentropy', optimizer='adam')
# define the checkpoint
filepath = "bob_ross/weights-deepeer.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='loss',
verbose=1,
save_best_only=True,
mode='min')
# fit model using our gpu
with tf.device('/gpu:0'):
model.fit(x, y,
batch_size=64,
epochs=15,
verbose=1,
callbacks=[generate_text, checkpoint])
###Output
_____no_output_____
###Markdown
Loading the Model After training, which took about 2 hours using a GCP instance with a Tesla P100 GPU, we load the best model and perform a prediction. We loaded our model from the saved weights, and now we can predict. I chose a temperature of `0.5`; it seemed to have the best results.
###Code
# model.load_weights("weights-deepeer.hdf5")
from keras.models import load_model
model = load_model("bob_ross/weights-deepeer.hdf5")
model
# model.compile(loss='categorical_crossentropy', optimizer='adam')
model.compile(loss='categorical_crossentropy', optimizer='adam')
int_to_char = dict((i, c) for i, c in enumerate(chars))
start_index = 0
for diversity in [0.5]:
print('----- diversity:', diversity)
generated = ''
sentence = all_transcriptions[start_index: start_index + maxlen]
generated += sentence
# print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(1000):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
###Output
----- diversity: 0.5
- Hi, welcome back. I'm certainly glad you can do this black canvas. I have the same clouds that the light on that little bushes that lives on the brush, and I'm gonna go up in here. There, something like that. There, and we'll just put a little bit of this but of the Prussian blue to think on the brush here. We'll just push in some little bushes. And I wanna see what you looks like that, let's go back into the bright red. And you can make it a little bit of the little bushes and sidight to have a little bit of the little light color. Just a little bit of the background color to the colors on the brush, and I wanna do is in the background, I'm gonna put a little bit of black in here and there. Just sort of lay the color. There, that easy. And we can see it in a little more of the lighter and they go right into the one of the lay of the paintings that you have the colors that you go. And we got a little bit of lighter on the canvas on the canvas, and we can see the sun up and make it any signes that come back in the color on
|
ccsn/notebooks/ppn.py_demo_publ.ipynb | ###Markdown
CCSN and gamma-process
Introduction
This run tests the production of the gamma process in CCSNe. Here we selected hot conditions, where the peak of Se74, one of the lightest p-nuclei, is obtained.
Trajectory
Extracted from the M=15 Msun, Z=0.01 model of Ritter et al. 2018. Mass coordinate: 1.84 Msun.
Science case
Production of the gamma process in CCSNe. Notice that this is not representative of the whole gamma process: the trajectory was selected by looking at the production peak of the p-nucleus Se74. While some other p-nuclei are still abundant, like Kr78 and Sr84, heavier p-nuclei are not made here; they need less extreme conditions.
Comparison of master and modular2 runs
Runs
Run | Comment | Git master | Git modular2 | run date
-----|--------|----------|------|---
ppn_default_gammaprocess | the first run, everything default, extended network | 3e3f2c6 | e070ed8 | 11 September, 2021
ppn_01_gammaprocess | done for modular2, integration_method=0 | 3e3f2c6 | e070ed8 | 11 September, 2021
ppn_02_gammaprocess | done for modular2, integration_method=0, detailed_balance=.False. | 3e3f2c6 | e070ed8 | 11 September, 2021
ppn_03_gammaprocess | done for modular2, as ppn_02 and screen_option=1 | 3e3f2c6 | e070ed8 | 12 September, 2021
Where
`/user/scratch14_wendi3/NuGrid/OZoNE21/notebooks/ppn_gammaprocess_ccsn_se74`
Differences between master and modular2
* Initial differences in the proton and alpha-particle production from the reactions. Relevant differences in the gamma-process nuclei.
* Integration method downgraded for modular2, which reduces the differences significantly for protons and alphas. Differences in the final gamma-process production are still present.
* detailed_balance removed from modular2. Some differences are still present, but extremely reduced. Notice that this trajectory does not need subtimesteps, so the improved modular2 scheme should not play a role.
* No impact of screening. One possible reason is that modular2 has a new interpolation for the KADoNiS rates, but this is difficult to test. Overall the agreement is good.
* DONE
###Code
# %pylab nbagg
%pylab ipympl
from nugridpy import ppn
from nugridpy import utils
import matplotlib.pyplot as plt
#%pylab nbagg
#from nugridpy import ppn
from nugridpy import utils as ut
# loading modular2
#dir_mod2='/user/scratch14_wendi3/NuGrid/OZoNE21/ppn-cases/modular2/ppn_gammaprocess_ccsn_se74/ppn_default_gammaprocess/'
#dir_mod2='/user/scratch14_wendi3/NuGrid/OZoNE21/ppn-cases/modular2/ppn_gammaprocess_ccsn_se74/ppn_01_gammaprocess/'
#dir_mod2='/user/scratch14_wendi3/NuGrid/OZoNE21/ppn-cases/modular2/ppn_gammaprocess_ccsn_se74/ppn_02_gammaprocess/'
#dir_mod2='/user/scratch14_wendi3/NuGrid/OZoNE21/ppn-cases/modular2/ppn_gammaprocess_ccsn_se74/ppn_03_gammaprocess/'
dir_mod2='/user/scratch14_wendi3/dpa/nuppn_xrb/frames/ppn/run_ppn_ccsn/gp_process_inse0/'
pa=ppn.abu_vector(dir_mod2); px = ppn.xtime(dir_mod2)
n_cyc = len(pa.files)-1
# loading master
#m_dir = '/user/scratch14_wendi3/NuGrid/OZoNE21/ppn-cases/master/ppn_gammaprocess_ccsn_se74/ppn_default_gammaprocess/'
m_dir='/user/scratch14_wendi3/dpa/nuppn_xrb/frames/ppn/run_ppn_ccsn/gp_process_inse1/'
pam=ppn.abu_vector(m_dir); pxm = ppn.xtime(m_dir)
ifig=1; plt.close(ifig); plt.figure(ifig)
ref_I_want = 0
pa.iso_abund(n_cyc,decayed=True,stable=True,elemaburtn=True,ref=ref_I_want)
plt.xlim(5,80)#; plt.ylim(-8,0.5)
specs = ['PROT','HE 4','C 12','O 16','SI 28','SE 74','KR 78','SR 84'] # isotopes to plot
y_lim = (-6,0.2)
legend_loc = 4
y_axis_offset = 0
# make the plot
symbs=utils.symbol_list('lines1')
abus=[]
for spec in specs:
abu=pxm.get(spec)
abus.append(abu)
yr = 60.*60.*24.*365.
time=pxm.get('time')*yr
close(10);figure(10)
for i in range(len(specs)):
plt.semilogx(time,log10(abus[i] + y_axis_offset),symbs[i],lw=0.5,label=specs[i])
plt.legend(loc='lower right', ncol=4, fancybox=True)
# and now for modular2
abus = []
for spec in specs:
abu=px.get(spec)
abus.append(abu)
for i in range(len(specs)):
plt.semilogx(time,log10(abus[i] + y_axis_offset),symbs[i],lw=2)
plt.xlabel('$\mathrm{time\ (sec)}$',fontsize=16); plt.ylabel('$\log_{10}(X_i)$',fontsize=16)
plt.ylim(-10,0.5); plt.title('master (thin lines) vs modular2 (thick lines)')
plt.tight_layout()
# different number of stable isotopes... to be debugged
#ifig=3;close(ifig);figure(ifig)
#ratio = pa.abunds/pam.abunds
#plt.semilogy(pa.a_iso_to_plot,ratio,'+m')
#plt.axhline(y=1)
#plt.ylim(0.1,10)
#pa.abu_chart?
# issue with using mass_range. Not functional
# to check better
ifig=2;close(ifig);figure(ifig)
pa.abu_chart(n_cyc,ifig=ifig,plotaxis=[10,30,10,30])
# this requires flux files available to be tested.
#pa.abu_flux_chart(n_cyc,plotaxis=[15,35,15,25],profile='neutron',prange=4)
close(157);figure(157)
z_ran = [5,25]; y_lim=[-6,3.7]
plot_cyc = 200 # n_cyc
pa.elemental_abund(plot_cyc,zrange=z_ran, ref=0, mark='o',linestyle='dotted',\
title_items=['densn','mod'],ylim=y_lim)
# trying to make production factor plot with elements. Does not work?
close(158);figure(158)
z_ran = [5,25]; y_lim=[-1,4.7]
solar_file = '/user/scratch14_wendi3/NuGrid/CODE/modular2/NuPPN/frames/mppnp/USEEPP/iniab2.0E-02GN93.ppn'
pa.elemental_abund(n_cyc,zrange=z_ran, ylim = y_lim, ref=1,solar_filename=solar_file, mark='o',linestyle='dotted',\
title_items=['mod'])
plt.grid(None)
#ifig = 11; plt.close(ifig)#; plt.figure(ifig)
fig,ax = plt.subplots() #plt.figure(ifig)
x = time; y = px.get('t9')
ax.semilogx(x,y,'k-',label='temperature')
ax.legend(loc='lower left')
ax.set_ylabel('Temperature (GK)',fontsize=15); ax.set_xlabel('Time (sec)',fontsize=15)
# twin object for two different y-axis on the sample plot
ax2=ax.twinx()
x = time; y = px.get('rho')
ax2.semilogx(x,y,'b-+',label='density',markevery=10)
ax2.set_ylabel('Density (g cm$^{-3}$)',fontsize=15)
ax2.legend(loc='upper right')
plt.gcf().subplots_adjust(right=0.85)
###Output
_____no_output_____ |
notebooks/puma_mapping.ipynb | ###Markdown
That's a lot of columns
###Code
block
puma_2010_path = Path("../data/raw/ipums_puma_2010/ipums_puma_2010.shp")
us_puma = gpd.read_file(
puma_2010_path,
encoding="utf-8",
)
us_puma.head()
md_puma = us_puma.drop(us_puma[us_puma['STATEFIP'] != '24'].index)
md_puma.head()
md_puma.plot()
len(md_puma)
block
# Ensure that the coordinate reference system is the same
block = block.to_crs(md_puma.crs)
block.plot()
###Output
_____no_output_____ |
example_nbrequests.ipynb | ###Markdown
Example nbrequestsPretty printing requests/responses from the [python requests](http://requests.readthedocs.io/) library in Jupyter notebook.
###Code
# autoreload for development
%load_ext autoreload
%autoreload 1
%aimport nbrequests
import requests
from nbrequests import display_request
###Output
_____no_output_____
###Markdown
GET requestExecute `requests.get('...')` and format the result in the notebook output using `display_request(r)`.
###Code
r = requests.get('http://httpbin.org/get')
display_request(r)
###Output
_____no_output_____
###Markdown
POST request
###Code
r = requests.post('http://httpbin.org/post', data='text data')
display_request(r)
r = requests.post('http://httpbin.org/post', data=b'binary data')
display_request(r)
###Output
_____no_output_____
###Markdown
JSON POST requestJSON is currently renderer using [Renderjson](https://github.com/caldwell/renderjson/).
###Code
import json
r = requests.post('http://httpbin.org/post', json={"a": "b"})
display_request(r)
###Output
_____no_output_____
###Markdown
500 error response
###Code
r = requests.get('http://httpbin.org/status/500')
display_request(r)
###Output
_____no_output_____ |
doc/source/usage/tutorial/Calibrant/new_calibrant.ipynb | ###Markdown
Creation of a new calibrantIn this tutorial we will see how to create a new calibrant. For this example we will use one of the calibrant sold by the NIST: Chromium oxide.The cell parameter are definied in this document:http://www.cristallografia.org/uploaded/614.pdfThe first step is to record the cell parameters and provide them to pyFAI to define the cell.
###Code
import pyFAI
print("pyFAI version",pyFAI.version)
from pyFAI.calibrant import Cell
crox = Cell.hexagonal(4.958979, 13.59592)
###Output
_____no_output_____
###Markdown
Chromium oxide has a crystal structure de Corrundom which is R-3c (space group 167). The selection rules are rather complicated and are available in:http://img.chem.ucl.ac.uk/sgp/large/167bz2.gifWe will setup a function corresponding to the selection rules. It returns True if the reflection is active and False otherwise.
###Code
def reflection_condition_167(h,k,l):
"""from http://img.chem.ucl.ac.uk/sgp/large/167bz2.htm"""
if h == 0 and k == 0:
# 00l: 6n
return l%6 == 0
elif h == 0 and l == 0:
# 0k0: k=3n
return k%3 == 0
elif k == 0 and l == 0:
# h00: h=3n
return h%3 == 0
elif h == k:
# hhl: l=3n
return l%3 == 0
elif l == 0:
# hk0: h-k = 3n
        return (h-k)%3 == 0
elif k == 0:
# h0l: l=2n h-l = 3n
return (l%2 == 0) and ((h - l)%3 == 0)
elif h == 0:
# 0kl: l=2n h+l = 3n
return (l%2 == 0) and ((k + l)%3 == 0)
else:
# -h + k + l = 3n
return (-h + k + l) % 3 == 0
# Use the actual selection rule, not the short version:
#cro.selection_rules.append(lambda h, k, l: ((-h + k + l) % 3 == 0))
crox.selection_rules.append(reflection_condition_167)
for reflex in crox.d_spacing(1).values():
print(reflex[0], reflex[1])
crox.save("Cr2O3", "Eskolaite (R-3c)", dmin=0.1, doi="NIST reference compound SRM 674b")
###Output
_____no_output_____
###Markdown
Creation of a new calibrantIn this tutorial we will see how to create a new calibrant. For this example we will use one of the calibrant sold by the NIST: Chromium oxide.The cell parameter are definied in this document:http://www.cristallografia.org/uploaded/614.pdfThe first step is to record the cell parameters and provide them to pyFAI to define the cell.
###Code
import pyFAI
print(pyFAI.version)
from pyFAI.calibrant import Cell
crox = Cell.hexagonal(4.958979, 13.59592)
###Output
_____no_output_____
###Markdown
Chromium oxide has a crystal structure de Corrundom which is R-3c (space group 167). The selection rules are rather complicated and are available in:http://img.chem.ucl.ac.uk/sgp/large/167bz2.gifWe will setup a function corresponding to the selection rules. It returns True if the reflection is active and False otherwise.
###Code
def reflection_condition_167(h,k,l):
"""from http://img.chem.ucl.ac.uk/sgp/large/167bz2.htm"""
if h == 0 and k == 0:
# 00l: 6n
return l%6 == 0
elif h == 0 and l == 0:
# 0k0: k=3n
return k%3 == 0
elif k == 0 and l == 0:
# h00: h=3n
return h%3 == 0
elif h == k:
# hhl: l=3n
return l%3 == 0
elif l == 0:
# hk0: h-k = 3n
        return (h-k)%3 == 0
elif k == 0:
# h0l: l=2n h-l = 3n
return (l%2 == 0) and ((h - l)%3 == 0)
elif h == 0:
# 0kl: l=2n h+l = 3n
return (l%2 == 0) and ((k + l)%3 == 0)
else:
# -h + k + l = 3n
return (-h + k + l) % 3 == 0
# Use the actual selection rule, not the short version:
#cro.selection_rules.append(lambda h, k, l: ((-h + k + l) % 3 == 0))
crox.selection_rules.append(reflection_condition_167)
for reflex in crox.d_spacing(1).values():
print(reflex[0], reflex[1])
crox.save("Cr2O3", "Eskolaite (R-3c)", dmin=0.1, doi="NIST reference compound SRM 674b")
###Output
_____no_output_____ |
Patterns for Cleaner Python/String-formatting.ipynb | ###Markdown
A shocking truth about string formatting '*Old Style*' String formattingStrings in Python have a unique built-in operation that can be accessed with the %-operator. It's a shortcut that lets you do simple positional formatting very easily. In old-style string formatting there are also other format specifiers available that let you control the output. For example, it's possible to convert numbers to hexadecimal notation. It's also possible to refer to variable substitutions by name in your format string, if you pass a mapping to the %-operator. This makes your format strings easier to maintain and easier to modify in the future: you don't have to worry about making sure the order in which you pass the values matches the order in which they are referenced in the format string. Old-style formatting is still supported in the latest versions of Python. (A couple of extra specifiers are shown at the end of the next code cell.)
###Code
name = 'Bob'
errno = 156344
print('Hello, %s' % name)
## Hexadecimal parser
print('Hello, 0x%x' % errno)
## Format string by name
print('Hey, %(name)s, there is a 0x%(errno)x error!' % {'name': name, 'errno': errno})
###Output
Hello, Bob
Hello, 0x262b8
Hey, Bob, there is a 0x262b8 error!
###Markdown
'*New Style*' String formatting Python 3 introduced a new way to do string formatting that was later back-ported to Python 2.7. Formatting is now handled by calling the *format()* method on a string object. You can also refer to your variable substitutions by name and use them in any order you want. This is quite a powerful feature, as it allows re-arranging the order of display without changing the arguments passed to the format function. This also shows that the syntax to format an int variable as a hexadecimal string has changed. Overall, the format string syntax has become more powerful without complicating the simpler use cases. It pays off to read up on this *string formatting mini-language* in the Python documentation.
###Code
print('Hello, {}'.format(name))
print('Hey, {name}, there is a 0x{errno:x} error!'.format(name=name, errno=errno))
###Output
Hello, Bob
Hey, Bob, there is a 0x262b8 error!
###Markdown
Literal String Interpolation (Python 3.6+) Python 3.6 adds yet another way to format strings, called *Formatted String Literals*. This new way of formatting strings lets you use embedded Python expressions inside string constants. Formatted string literals also support the existing format string syntax of the str.format() method.
###Code
print(f'Hello, {name}')
print(f'Hey, {name}, there is a {errno:#x} error!')
###Output
Hello, Bob
Hey, Bob, there is a 0x262b8 error!
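###Markdown
Because f-strings embed real Python expressions, we can go beyond plain variable names. As a small illustration of our own (not part of the original text), any valid expression can sit inside the braces:
###Code
# Arbitrary expressions -- conversions, function calls, arithmetic -- work inside f-strings
print(f'{name!r} has {len(name)} characters')
print(f'errno doubled is {errno * 2:#x}')
###Output
_____no_output_____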
###Markdown
Template Strings One more technique for string formatting in Python is Template Strings. It's a simpler and less powerful mechanism, but in some cases this might be exactly what you're looking for. Template strings are not a core language feature; they're supplied by the `string` module in the standard library. Another difference is that template strings don't allow format specifiers. That works great, but you're probably wondering when you should use template strings in your Python programs. The best use case for template strings is when you're handling format strings generated by users of your program.
###Code
from string import Template
temp = Template('Hey, $name!')
print(temp.substitute(name=name))
templ_string = 'Hey $name, there is a $error error!'
print(Template(templ_string).substitute(name=name, error=hex(errno)))
###Output
Hey, Bob!
Hey Bob, there is a 0x262b8 error!
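###Markdown
To illustrate why user-supplied format strings are the sweet spot for templates, here is a small sketch of our own (the malicious string below is hypothetical): str.format() will happily follow attribute lookups embedded in the format string, while Template rejects such placeholders.
###Code
# A format string coming from an untrusted user can poke at object internals:
malicious = 'Hey {name.__class__}!'
print(malicious.format(name=name)) # exposes <class 'str'>
# The same idea written as a Template placeholder is rejected with a ValueError:
try:
    Template('Hey ${name.__class__}!').substitute(name=name)
except ValueError as err:
    print('Rejected:', err)
###Output
_____no_output_____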
|
Santander Coders - Data Science Schoolarship/Santander_Python_Basics_Module_01.ipynb | ###Markdown
Operators Arithmetic operators
###Code
# We can do simple arithmetic operations
a = 2 + 3 # Addition
b = 2 - 3 # Subtraction
c = 2 * 3 # Multiplication
d = 2 / 3 # Division
e = 2 // 3 # Integer division
f = 2 ** 3 # Exponentiation
g = 2 % 3 # Remainder of division
print (a, b, c, d, e, f, g)
# We can do operations inside print
print (a+1, b+1)
# We can do operations with non-integer variables
nome = input('Digite seu primeiro nome:')
nome = nome + ' Leal'
print(nome)
###Output
6 0
Digite seu primeiro nome:Welliton
Welliton Leal
###Markdown
Relational operators
###Code
comparacao1 = 5 > 3
print(comparacao1)
comparacao2 = 5 < 3
print(comparacao2)
###Output
True
False
###Markdown
Sequential Structures Outputs
###Code
y = 3.14 # a real (float) variable
escola = "Let's Code" # a string (literal) variable
# We can display text on the screen and/or variable values with the print() function.
print('eu estudo na ', escola)
print('pi vale', y)
# We can do operations inside print:
print (y+1, y**2)
###Output
eu estudo na Let's Code
pi vale 3.14
4.140000000000001 9.8596
###Markdown
Inputs
###Code
# We can read values from the keyboard with the input() function.
# It lets us show a message to the user.
nome = input('Digite o seu nome: ')
# Everything read by input() is treated as a string (str).
# To handle other data types we need to perform a conversion:
peso = float(input('Digite o seu peso: ')) # reads the weight as a real number
idade = int(input('Digite a sua idade: ')) # reads the age as an integer
print(nome, 'pesa', peso, 'kg e tem', idade, 'anos de idade.')
salario_mensal = input('Digite o valor do seu salário mensal: ')
salario_mensal = float(salario_mensal)
gasto_mensal = input("Digite o seu gasto mensal: ")
gasto_mensal = float(gasto_mensal)
montante = salario_mensal - gasto_mensal
print(montante)
###Output
Digite o valor do seu salário mensal: 1000
Digite o seu gasto mensal: 500
500.0
###Markdown
Conditional Structures if
###Code
idade = int(input('Digite sua idade:'))
if idade >= 12:
print('Você pode entrar na montanha russa.')
print('Obrigado por participar.')
altura = float(input('Digite sua altura, em metros:'))
if idade >= 12 and altura >= 1.60:
print('Você pode entrar na montanha russa.')
print('Obrigado por participar.')
valor_passagem = 4.30
valor_corrida = input('Qual é o valor da corrida?')
if float(valor_corrida) <= valor_passagem*5:
print('pague a corrida')
else:
print('pegue um ônibus')
###Output
Qual é o valor da corrida?40
pegue um ônibus
###Markdown
else
###Code
idade = int(input('Digite sua idade:'))
altura = float(input('Digite sua altura, em metros:'))
if idade >= 12 and altura >= 1.60:
print('Você pode entrar na montanha russa.')
else:
print('Você não pode entrar na montanha russa.')
print('Obrigado por participar.')
###Output
Digite sua idade:12
Digite sua altura, em metros:1
Você não pode entrar na montanha russa.
Obrigado por participar.
###Markdown
Repetition Structure while
###Code
horario = int(input('Qual horario é agora? '))
# Testing the condition a single time with if:
if 0 < horario < 6:
print('Você está no horario da madrugada')
else:
print('Você nao está no horario da madrugada')
# Testing the condition in a loop with while:
while 0 < horario < 6:
print('Você está no horario da madrugada')
horario = horario + 1
else:
print('Você nao está no horario da madrugada')
# The while loop keeps decrementing the popcorn count until it reaches 0:
num_pipocas = int(input('Digite o numero de pipocas: '))
while num_pipocas > 0:
print('O numero de pipocas é: ', num_pipocas)
num_pipocas = num_pipocas - 1
###Output
Qual horario é agora? 3
Você está no horario da madrugada
Você está no horario da madrugada
Você está no horario da madrugada
Você está no horario da madrugada
Você nao está no horario da madrugada
Digite o numero de pipocas: 3
O numero de pipocas é: 3
O numero de pipocas é: 2
O numero de pipocas é: 1
###Markdown
Input validation
###Code
# the example below does not accept a salary lower than the current minimum wage:
salario = float(input('Digite seu salario: '))
while salario < 998.0:
salario = float(input('Entre com um salario MAIOR DO QUE 998.0: '))
else:
print('O salario que você entrou foi: ', salario)
# the example below only exits the loop when the user types "OK":
resposta = input('Digite OK: ')
while resposta != 'OK':
resposta = input('Não foi isso que eu pedi, digite OK: ')
###Output
Digite OK: nao
Não foi isso que eu pedi, digite OK: OK
###Markdown
Counter
###Code
# We declare a counter as 0:
contador = 0
# We define the number of repetitions:
numero = int(input('Digite um numero: '))
# We run the while loop until the counter equals the number of repetitions:
while contador < numero:
print(contador)
contador = contador + 1
###Output
Digite um numero: 9
0
1
2
3
4
5
6
7
8
###Markdown
Break
###Code
while True:
resposta = input('Digite OK: ')
if resposta == 'OK':
break
###Output
Digite OK: 3
Digite OK: nao
Digite OK: OK
|
1_code/3_results_sample_characteristics.ipynb | ###Markdown
Setup directory variables
###Code
print(os.environ['PIPELINEDIR'])
if not os.path.exists(os.environ['PIPELINEDIR']): os.makedirs(os.environ['PIPELINEDIR'])
figdir = os.path.join(os.environ['OUTPUTDIR'], 'figs')
print(figdir)
if not os.path.exists(figdir): os.makedirs(figdir)
phenos = ['Psychosis_Positive','Psychosis_NegativeDisorg','Overall_Psychopathology']
phenos_label = ['Psychosis (positive)','Psychosis (negative)','Overall psychopathology']
phenos_short = ['Psy. (pos)','Psy. (neg)','Ov. psy.']
metrics = ['str', 'ac', 'bc', 'cc', 'sgc']
metrics_label = ['Strength', 'Average controllability', 'Betweenness centrality', 'Closeness centrality', 'Subgraph centrality']
###Output
_____no_output_____
###Markdown
Setup plots
###Code
if not os.path.exists(figdir): os.makedirs(figdir)
os.chdir(figdir)
sns.set(style='white', context = 'paper', font_scale = 1)
sns.set_style({'font.family':'sans-serif', 'font.sans-serif':['Public Sans']})
cmap = my_get_cmap('pair')
###Output
_____no_output_____
###Markdown
Load data
###Code
df = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'store', outfile_prefix+'df.csv'))
df.set_index(['bblid', 'scanid'], inplace = True)
df_node = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'store', outfile_prefix+'df_node.csv'))
df_node.set_index(['bblid', 'scanid'], inplace = True)
df_node_ac_i2 = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'store', outfile_prefix+'df_node_ac_i2.csv'))
df_node_ac_i2.set_index(['bblid', 'scanid'], inplace = True)
df_node_ac_overc = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'store', outfile_prefix+'df_node_ac_overc.csv'))
df_node_ac_overc.set_index(['bblid', 'scanid'], inplace = True)
df_node_ac_overc_i2 = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'store', outfile_prefix+'df_node_ac_overc_i2.csv'))
df_node_ac_overc_i2.set_index(['bblid', 'scanid'], inplace = True)
c = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'out', outfile_prefix+'c.csv'))
c.set_index(['bblid', 'scanid'], inplace = True); print(c.shape)
print(np.sum(df_node.filter(regex = 'ac').corrwith(df_node_ac_i2, method='spearman') < 0.9999))
print(np.sum(df_node_ac_overc.corrwith(df_node_ac_overc_i2, method='spearman') < 0.9999))
print(np.sum(df_node.filter(regex = 'ac').corrwith(df_node_ac_i2, method='pearson') < 0.9999))
print(np.sum(df_node_ac_overc.corrwith(df_node_ac_overc_i2, method='pearson') < 0.9999))
df['sex'].unique()
print(np.sum(df.loc[:,'sex'] == 1))
print((np.sum(df.loc[:,'sex'] == 1)/df.shape[0]) * 100)
print(np.sum(df.loc[:,'sex'] == 2))
print((np.sum(df.loc[:,'sex'] == 2)/df.shape[0]) * 100)
print(df['ageAtScan1_Years'].mean())
print(c['ageAtScan1'].mean())
print(np.sum(c['ageAtScan1'] < c['ageAtScan1'].mean()))
print(c.shape[0]-np.sum(c['ageAtScan1'] <= c['ageAtScan1'].mean()))
df['ageAtScan1_Years'].std()
###Output
_____no_output_____
###Markdown
Sex
###Code
stats = pd.DataFrame(index = phenos, columns = ['test_stat', 'pval'])
for i, pheno in enumerate(phenos):
x = df.loc[df.loc[:,'sex'] == 1,pheno]
y = df.loc[df.loc[:,'sex'] == 2,pheno]
test_output = sp.stats.ttest_ind(x,y)
stats.loc[pheno,'test_stat'] = test_output[0]
stats.loc[pheno,'pval'] = test_output[1]
stats.loc[:,'pval_corr'] = get_fdr_p(stats.loc[:,'pval'])
stats.loc[:,'sig'] = stats.loc[:,'pval_corr'] < 0.05
stats
f, ax = plt.subplots(1,len(phenos))
f.set_figwidth(len(phenos)*2.5)
f.set_figheight(2.5)
# sex: 1=male, 2=female
for i, pheno in enumerate(phenos):
x = df.loc[df.loc[:,'sex'] == 1,pheno]
sns.distplot(x, ax = ax[i], label = 'male')
y = df.loc[df.loc[:,'sex'] == 2,pheno]
sns.distplot(y, ax = ax[i], label = 'female')
if i == 0:
ax[i].legend()
ax[i].set_xlabel(pheno)
if stats.loc[pheno,'sig']:
ax[i].set_title('t-stat:' + str(np.round(stats.loc[pheno,'test_stat'],2)) + ', p-value: ' + str(np.round(stats.loc[pheno,'pval_corr'],4)), fontweight="bold")
else:
ax[i].set_title('t-stat:' + str(np.round(stats.loc[pheno,'test_stat'],2)) + ', p-value: ' + str(np.round(stats.loc[pheno,'pval_corr'],4)))
f.savefig(outfile_prefix+'symptoms_distributions_sex.png', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
Age
###Code
stats = pd.DataFrame(index = phenos, columns = ['r', 'pval'])
x = df['ageAtScan1_Years']
for i, pheno in enumerate(phenos):
y = df[pheno]
r,p = sp.stats.pearsonr(x,y)
stats.loc[pheno,'r'] = r
stats.loc[pheno,'pval'] = p
stats.loc[:,'pval_corr'] = get_fdr_p(stats.loc[:,'pval'])
stats.loc[:,'sig'] = stats.loc[:,'pval_corr'] < 0.05
stats
f, ax = plt.subplots(1,len(phenos))
f.set_figwidth(len(phenos)*2.5)
f.set_figheight(2.5)
x = df['ageAtScan1_Years']
for i, pheno in enumerate(phenos):
y = df[pheno]
sns.regplot(x, y, ax=ax[i], scatter=False)
ax[i].scatter(x, y, color='gray', s=5, alpha=0.5)
if stats.loc[pheno,'sig']:
ax[i].set_title('r:' + str(np.round(stats.loc[pheno,'r'],2)) + ', p-value: ' + str(np.round(stats.loc[pheno,'pval_corr'],4)), fontweight="bold")
else:
ax[i].set_title('r:' + str(np.round(stats.loc[pheno,'r'],2)) + ', p-value: ' + str(np.round(stats.loc[pheno,'pval_corr'],4)))
f.savefig(outfile_prefix+'symptoms_correlations_age.png', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
DWI data quality
###Code
# 'dti64MeanRelRMS', 'dti64Tsnr', 'dti64Outmax', 'dti64Outmean',
f, ax = plt.subplots()
f.set_figwidth(2)
f.set_figheight(2)
x = df['dti64MeanRelRMS']
sns.distplot(x, ax = ax)
ax.set_xlabel('In-scanner motion \n(mean relative framewise displacement)')
ax.set_ylabel('Counts')
ax.tick_params(pad = -2)
textstr = 'median = {:.2f}\nmean = {:.2f}\nstd = {:.2f}'.format(x.median(), x.mean(), x.std())
ax.text(0.975, 0.975, textstr, transform=ax.transAxes,
verticalalignment='top', horizontalalignment='right')
f.savefig(outfile_prefix+'inscanner_motion.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
f, ax = plt.subplots()
f.set_figwidth(2)
f.set_figheight(2)
x = df['dti64Tsnr']
sns.distplot(x, ax = ax)
ax.set_xlabel('Temporal signal to noise ratio')
ax.set_ylabel('Counts')
ax.tick_params(pad = -2)
textstr = 'median = {:.2f}\nmean = {:.2f}\nstd = {:.2f}'.format(x.median(), x.mean(), x.std())
ax.text(0.05, 0.975, textstr, transform=ax.transAxes,
verticalalignment='top', horizontalalignment='left')
f.savefig(outfile_prefix+'dwi_tsnr.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
x = df['dti64MeanRelRMS']
# x = df['dti64Tsnr']
stats = pd.DataFrame(index = df_node.columns, columns = ['r','p'])
for col in df_node.columns:
r = sp.stats.spearmanr(x, df_node.loc[:,col])
stats.loc[col,'r'] = r[0]
stats.loc[col,'p'] = r[1]
f, ax = plt.subplots(1,len(metrics))
f.set_figwidth(len(metrics)*1.5)
f.set_figheight(1.5)
for i, metric in enumerate(metrics):
sns.distplot(stats.filter(regex = metric, axis = 0)['r'], ax = ax[i])
ax[i].set_title(metrics_label[i])
if i == 2: ax[i].set_xlabel('QC-SC (Spearman\'s rho)')
else: ax[i].set_xlabel('')
if i == 0: ax[i].set_ylabel('Counts')
ax[i].tick_params(pad = -2)
qc_sc = np.sum(stats.filter(regex = metric, axis = 0)['p']<.05)/num_parcels*100
textstr = '{:.0f}%'.format(qc_sc)
ax[i].text(0.975, 0.975, textstr, transform=ax[i].transAxes,
verticalalignment='top', horizontalalignment='right')
f.savefig(outfile_prefix+'qc_sc.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
Diagnostic table
###Code
# to_screen = ['goassessSmryPsy', 'goassessSmryMood', 'goassessSmryEat', 'goassessSmryAnx', 'goassessSmryBeh']
# counts = np.sum(df.loc[:,to_screen] == 4)
# print(counts)
# print(counts/df.shape[0]*100)
df['goassessDxpmr4_bin'] = df.loc[:,'goassessDxpmr4'] == '4PS'
df['goassessDxpmr4_bin'] = df['goassessDxpmr4_bin'].astype(int)*4
to_screen = ['goassessDxpmr4_bin','goassessSmryMan', 'goassessSmryDep', 'goassessSmryBul', 'goassessSmryAno', 'goassessSmrySoc',
'goassessSmryPan', 'goassessSmryAgr', 'goassessSmryOcd', 'goassessSmryPtd', 'goassessSmryAdd',
'goassessSmryOdd', 'goassessSmryCon']
counts = np.sum(df.loc[:,to_screen] == 4)
print(counts)
print(counts/df.shape[0]*100)
to_keep = counts[counts >= 50].index
list(to_keep)
counts[counts >= 50]
my_xticklabels = ['Psychosis spectrum (n=303)',
'Depression (n=156)',
'Social anxiety disorder (n=261)',
'Agoraphobia (n=61)',
'PTSD (n=136)',
'ADHD (n=168)',
'ODD (n=353)',
'Conduct disorder (n=85)']
f, ax = plt.subplots(1,len(phenos))
f.set_figwidth(len(phenos)*2.5)
f.set_figheight(2)
for i, pheno in enumerate(phenos):
mean_scores = np.zeros(len(to_keep))
for j, diagnostic_score in enumerate(to_keep):
idx = df.loc[:,diagnostic_score] == 4
mean_scores[j] = df.loc[idx,pheno].mean()
ax[i].bar(x = np.arange(0,len(mean_scores)), height = mean_scores, color = 'w', edgecolor = 'k', linewidth = 1.5)
ax[i].set_ylim([-.2,1.2])
ax[i].set_xticks(np.arange(0,len(mean_scores)))
ax[i].set_xticklabels(my_xticklabels, rotation = 90)
ax[i].tick_params(pad = -2)
ax[i].set_title(phenos_label[i])
if i == 1:
ax[i].set_xlabel('Psychopathology group')
if i == 0:
ax[i].set_ylabel('Factor score (z)')
f.savefig(outfile_prefix+'symptom_dimensions_groups.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
Setup directory variables
###Code
figdir = os.path.join(os.environ['OUTPUTDIR'], 'figs')
print(figdir)
if not os.path.exists(figdir): os.makedirs(figdir)
labels = ['Train', 'Test']
phenos = ['Overall_Psychopathology','Psychosis_Positive','Psychosis_NegativeDisorg','AnxiousMisery','Externalizing','Fear']
phenos_label_short = ['Ov. psych.', 'Psy. (pos.)', 'Psy. (neg.)', 'Anx.-mis.', 'Ext.', 'Fear']
phenos_label = ['Overall psychopathology','Psychosis (positive)','Psychosis (negative)','Anxious-misery','Externalizing','Fear']
###Output
_____no_output_____
###Markdown
Setup plots
###Code
if not os.path.exists(figdir): os.makedirs(figdir)
os.chdir(figdir)
sns.set(style='white', context = 'paper', font_scale = 0.8)
sns.set_style({'font.family':'sans-serif', 'font.sans-serif':['Public Sans']})
cmap = my_get_cmap('pair')
###Output
_____no_output_____
###Markdown
Load data
###Code
df = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'out', outfile_prefix+'df.csv'))
df.set_index(['bblid', 'scanid'], inplace = True)
print(df.shape)
df['ageAtScan1_Years'].mean()
df['ageAtScan1_Years'].std()
df['sex'].unique()
print(np.sum(df.loc[:,'sex'] == 1))
print(np.round((np.sum(df.loc[:,'sex'] == 1)/df.shape[0]) * 100,2))
print(np.sum(df.loc[:,'sex'] == 2))
print(np.round((np.sum(df.loc[:,'sex'] == 2)/df.shape[0]) * 100,2))
np.sum(df.loc[:,'averageManualRating'] == 2)
# train/test proportion
print('train N:', np.sum(df.loc[:,train_test_str] == 0))
print(np.round(df.loc[df.loc[:,train_test_str] == 0,'ageAtScan1_Years'].mean(),2))
print(np.round(df.loc[df.loc[:,train_test_str] == 0,'ageAtScan1_Years'].std(),2))
print('test N:', np.sum(df.loc[:,train_test_str] == 1))
print(np.round(df.loc[df.loc[:,train_test_str] == 1,'ageAtScan1_Years'].mean(),2))
print(np.round(df.loc[df.loc[:,train_test_str] == 1,'ageAtScan1_Years'].std(),2))
###Output
test N: 990
15.13
3.54
###Markdown
1 = Male, 2 = Female
###Code
# train/test proportion
print('train, sex = 1, N:', np.sum(df.loc[df.loc[:,train_test_str] == 0,'sex'] == 1))
print(np.round((np.sum(df.loc[df.loc[:,train_test_str] == 0,'sex'] == 1)/np.sum(df.loc[:,train_test_str] == 0)) * 100,2))
print('train, sex = 2, N:',np.sum(df.loc[df.loc[:,train_test_str] == 0,'sex'] == 2))
print(np.round((np.sum(df.loc[df.loc[:,train_test_str] == 0,'sex'] == 2)/np.sum(df.loc[:,train_test_str] == 0)) * 100,2))
print('test, sex = 1, N:', np.sum(df.loc[df.loc[:,train_test_str] == 1,'sex'] == 1))
print(np.round((np.sum(df.loc[df.loc[:,train_test_str] == 1,'sex'] == 1)/np.sum(df.loc[:,train_test_str] == 1)) * 100,2))
print('test, sex = 2, N:',np.sum(df.loc[df.loc[:,train_test_str] == 1,'sex'] == 2))
print(np.round((np.sum(df.loc[df.loc[:,train_test_str] == 1,'sex'] == 2)/np.sum(df.loc[:,train_test_str] == 1)) * 100,2))
###Output
train, sex = 1, N: 146
51.96
train, sex = 2, N: 135
48.04
test, sex = 1, N: 457
46.16
test, sex = 2, N: 533
53.84
###Markdown
Sex
###Code
stats = pd.DataFrame(index = phenos, columns = ['test_stat', 'pval'])
for i, pheno in enumerate(phenos):
x = df.loc[df.loc[:,'sex'] == 1,pheno]
# x = df.loc[np.logical_and(df[train_test_str] == 1,df['sex'] == 1),pheno]
y = df.loc[df.loc[:,'sex'] == 2,pheno]
# y = df.loc[np.logical_and(df[train_test_str] == 1,df['sex'] == 2),pheno]
test_output = sp.stats.ttest_ind(x,y)
stats.loc[pheno,'test_stat'] = test_output[0]
stats.loc[pheno,'pval'] = test_output[1]
stats.loc[:,'pval_corr'] = get_fdr_p(stats.loc[:,'pval'])
stats.loc[:,'sig'] = stats.loc[:,'pval_corr'] < 0.05
np.round(stats.astype(float),2)
f, ax = plt.subplots(1,len(phenos))
f.set_figwidth(len(phenos)*1.4)
f.set_figheight(1.25)
# sex: 1=male, 2=female
for i, pheno in enumerate(phenos):
x = df.loc[df.loc[:,'sex'] == 1,pheno]
# x = df.loc[np.logical_and(df[train_test_str] == 1,df['sex'] == 1),pheno]
sns.kdeplot(x, ax = ax[i], label = 'male', color = 'b')
y = df.loc[df.loc[:,'sex'] == 2,pheno]
# y = df.loc[np.logical_and(df[train_test_str] == 1,df['sex'] == 2),pheno]
sns.kdeplot(y, ax = ax[i], label = 'female', color = 'r')
ax[i].set_xlabel('')
ax[i].set_title(phenos_label[i])
# if stats.loc[pheno,'sig']:
# ax[i].set_title('t-stat:' + str(np.round(stats.loc[pheno,'test_stat'],2)) + ', p-value: ' + str(np.round(stats.loc[pheno,'pval_corr'],4)), fontweight="bold")
# else:
# ax[i].set_title('t-stat:' + str(np.round(stats.loc[pheno,'test_stat'],2)) + ', p-value: ' + str(np.round(stats.loc[pheno,'pval_corr'],4)))
ax[i].tick_params(pad = -2)
ax[i].set_ylim([0,0.5])
if i == 0:
ax[i].set_ylabel('Counts')
else:
ax[i].set_ylabel('')
if i != 0:
ax[i].set_yticklabels('')
# if i == 0:
# ax[i].legend()
if stats.loc[pheno,'sig']:
textstr = 't = {:.2f} \np < 0.05'.format(stats.loc[pheno,'test_stat'])
else:
textstr = 't = {:.2f} \np = {:.2f}'.format(stats.loc[pheno,'test_stat'], stats.loc[pheno,'pval_corr'])
ax[i].text(0.05, 0.95, textstr, transform=ax[i].transAxes,
verticalalignment='top')
f.savefig(outfile_prefix+'symptoms_distributions_sex.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
nuisance correlations
###Code
stats = pd.DataFrame(index = phenos, columns = ['r', 'pval'])
covs = ['ageAtScan1_Years', 'medu1', 'mprage_antsCT_vol_TBV', 'averageManualRating', 'T1_snr']
covs_label = ['Age (yrs)', 'Maternal education \n(yrs)', 'TBV', 'T1 QA', 'T1 SNR']
for c, cov in enumerate(covs):
x = df[cov]
nan_filt = x.isna()
if nan_filt.any():
x = x[~nan_filt]
for i, pheno in enumerate(phenos):
y = df[pheno]
if nan_filt.any():
y = y[~nan_filt]
r,p = sp.stats.pearsonr(x,y)
stats.loc[pheno,'r'] = r
stats.loc[pheno,'pval'] = p
stats.loc[:,'pval_corr'] = get_fdr_p(stats.loc[:,'pval'])
stats.loc[:,'sig'] = stats.loc[:,'pval_corr'] < 0.05
f, ax = plt.subplots(1,len(phenos))
f.set_figwidth(len(phenos)*1.4)
f.set_figheight(1.25)
for i, pheno in enumerate(phenos):
y = df[pheno]
if nan_filt.any():
y = y[~nan_filt]
plot_data = pd.merge(x,y, on=['bblid','scanid'])
sns.kdeplot(x = cov, y = pheno, data = plot_data, ax=ax[i], color='gray', thresh=0.05)
sns.regplot(x = cov, y = pheno, data = plot_data, ax=ax[i], scatter=False)
# ax[i].scatter(x = plot_data[cov], y = plot_data[pheno], color='gray', s=1, alpha=0.25)
ax[i].set_ylabel(phenos_label[i], labelpad=-1)
ax[i].set_xlabel(covs_label[c])
ax[i].tick_params(pad = -2.5)
# ax[i].set_xlim([x.min()-x.min()*.10,
# x.max()+x.max()*.10])
if stats.loc[pheno,'sig']:
textstr = 'r = {:.2f} \np < 0.05'.format(stats.loc[pheno,'r'])
else:
textstr = 'r = {:.2f} \np = {:.2f}'.format(stats.loc[pheno,'r'], stats.loc[pheno,'pval_corr'])
ax[i].text(0.05, 0.975, textstr, transform=ax[i].transAxes,
verticalalignment='top')
f.subplots_adjust(wspace=0.5)
f.savefig(outfile_prefix+'symptoms_correlations_'+cov+'.png', dpi = 150, bbox_inches = 'tight', pad_inches = 0.1)
f.savefig(outfile_prefix+'symptoms_correlations_'+cov+'.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
Diagnostic table
###Code
df['goassessDxpmr4_bin'] = df.loc[:,'goassessDxpmr4'] == '4PS'
df['goassessDxpmr4_bin'] = df['goassessDxpmr4_bin'].astype(int)*4
to_screen = ['goassessDxpmr4_bin','goassessSmryMan', 'goassessSmryDep', 'goassessSmryBul', 'goassessSmryAno', 'goassessSmrySoc',
'goassessSmryPan', 'goassessSmryAgr', 'goassessSmryOcd', 'goassessSmryPtd', 'goassessSmryAdd',
'goassessSmryOdd', 'goassessSmryCon']
# counts = np.sum(df.loc[:,to_screen] == 4)
# counts = np.sum(df.loc[df.loc[:,train_test_str] == 0,to_screen] == 4)
counts = np.sum(df.loc[df.loc[:,train_test_str] == 1,to_screen] == 4)
print(counts)
print(np.round(counts/df.shape[0]*100,2))
to_keep = counts[counts >= 50].index
list(to_keep)
counts[counts >= 50]
# my_xticklabels = ['Psychosis spectrum (n=389)',
# 'Depression (n=191)',
# 'Social anxiety disorder (n=318)',
# 'Agoraphobia (n=77)',
# 'PTSD (n=168)',
# 'ADHD (n=226)',
# 'ODD (n=448)',
# 'Conduct disorder (n=114)']
my_xticklabels = ['Psychosis spectrum (n=364)',
'Depression (n=179)',
'Social anxiety disorder (n=295)',
'Agoraphobia (n=73)',
'PTSD (n=156)',
'ADHD (n=206)',
'ODD (n=407)',
'Conduct disorder (n=102)']
f, ax = plt.subplots(1,len(phenos))
f.set_figwidth(len(phenos)*1.4)
f.set_figheight(2)
for i, pheno in enumerate(phenos):
mean_scores = np.zeros(len(to_keep))
for j, diagnostic_score in enumerate(to_keep):
idx = df.loc[:,diagnostic_score] == 4
mean_scores[j] = df.loc[idx,pheno].mean()
ax[i].bar(x = np.arange(0,len(mean_scores)), height = mean_scores, color = 'w', edgecolor = 'k', linewidth = 1.5)
ax[i].set_ylim([-.2,1.2])
ax[i].set_xticks(np.arange(0,len(mean_scores)))
ax[i].set_xticklabels(my_xticklabels, rotation = 90)
ax[i].tick_params(pad = -2)
ax[i].set_title(phenos_label[i])
# if i == 1:
# ax[i].set_xlabel('Diagnostic group')
if i == 0:
ax[i].set_ylabel('Factor score (z)')
if i != 0:
ax[i].set_yticklabels('')
f.savefig(outfile_prefix+'symptom_dimensions_groups.svg', dpi = 300, bbox_inches = 'tight', pad_inches = 0)
###Output
_____no_output_____
###Markdown
Siblings
###Code
def count_dups(df):
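    # Helper (comments added for readability): finds family ids that appear more
    # than once and returns those ids, the distinct sibling-group sizes, and the
    # number of families having each group size.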
dup_bool = df['famid'].duplicated(keep=False)
unique_famid = np.unique(df.loc[dup_bool,'famid'])
number_dups = np.zeros(len(unique_famid),)
for i, famid in enumerate(unique_famid):
number_dups[i] = np.sum(df['famid'] == famid)
count_of_dups = []
unique_dups = np.unique(number_dups)
for i in unique_dups:
count_of_dups.append(np.sum(number_dups == i))
return unique_famid, unique_dups, count_of_dups
unique_famid, unique_dups, count_of_dups = count_dups(df)
print(len(unique_famid))
print(count_of_dups)
print(unique_dups)
print(np.multiply(count_of_dups,unique_dups))
print(np.multiply(count_of_dups,unique_dups))
unique_famid, unique_dups, count_of_dups = count_dups(df.loc[df['train_test'] == 0,:])
print(len(unique_famid))
print(count_of_dups)
print(unique_dups)
print(np.multiply(count_of_dups,unique_dups))
unique_famid, unique_dups, count_of_dups = count_dups(df.loc[df['train_test'] == 1,:])
print(len(unique_famid))
print(count_of_dups)
print(unique_dups)
print(np.multiply(count_of_dups,unique_dups))
###Output
0
[]
[]
[]
|
packaging/notebooks/2018-05-10_gallant_data.ipynb | ###Markdown
Make the DataArray/.nc
###Code
gd = xr.open_dataarray("/Users/jjpr/dev/scratch/gallant_data/gallant-V1/data.nc")
gd
gd = gd.T.rename({"image_file_name": "presentation"})
gd.coords["presentation_id"] = ("presentation", range(gd.shape[1]))
gd.coords["neuroid_id"] = ("neuroid", gd["neuroid"].values)
file_base = "/Users/jjpr/dev/scratch/gallant_data/gallant-V1/stimuli"
def massage_file_name(file_name):
split = re.split("\\\\|/", file_name)
relative_path = os.path.join(*split[4:])
full_path = os.path.join(file_base, relative_path)
basename = split[-1]
exists = os.path.exists(full_path)
sha1 = kf(full_path).sha1
result = {
"image_file_path_original": relative_path,
"image_id": sha1
}
return result
df_massage = pd.DataFrame(list(map(massage_file_name, gd["presentation"].values)))
df_massage
for column in df_massage.columns:
gd.coords[column] = ("presentation", df_massage[column])
gd
gd.reset_index(["neuroid", "presentation"], drop=True, inplace=True)
gd
gd.to_netcdf("gallant_v1_single_electrode.nc")
###Output
_____no_output_____
###Markdown
Make the image zip file
###Code
df_image_meta = pd.DataFrame({"image_id": np.unique(gd["image_id"].values)})
def first_dupe(sha1):
fr = FileRecord.get(sha1=sha1)
return fr.sightings[0].location
# order is not guaranteed, so on subsequent runs test that you got the same result, see below
df_image_meta["first_dupe"] = list(map(first_dupe, df_image_meta["image_id"]))
df_image_meta
def get_relative(path, base):
split_path = path.split("/")
split_base = base.split("/")
target_path = "/".join(split_path[len(split_base):])
return target_path
df_image_meta["relative_path"] = list(map(lambda x: get_relative(x, file_base), df_image_meta["first_dupe"]))
df_image_meta
target_zip_path = "/Users/jjpr/.mkgu/data/gallant.David2004/gallant_crcns_v1_stimuli.zip"
with zipfile.ZipFile(target_zip_path, 'w') as target_zip:
for image in df_image_meta.itertuples():
target_zip.write(image.first_dupe, arcname=image.relative_path)
containing_dir = os.path.dirname(target_zip_path)
with zipfile.ZipFile(target_zip_path, 'r') as new_zip:
new_zip.extractall(containing_dir)
def copied(source):
split = source.split("/")
target = os.path.join(containing_dir, "/".join(split[8:]))
return os.path.exists(target)
df_image_meta["copied"] = list(map(copied, df_image_meta["first_dupe"]))
df_image_meta
all(df_image_meta["copied"])
ls $target_base
###Output
[34mr0148A[m[m/ [34mr0156A[m[m/ [34mr0162B[m[m/ [34mr0168B[m[m/ [34mr0170A[m[m/ [34mr0208D[m[m/ [34mr0211A[m[m/ [34mr0215B[m[m/
[34mr0154B[m[m/ [34mr0158A[m[m/ [34mr0164C[m[m/ [34mr0169B[m[m/ [34mr0206B[m[m/ [34mr0210A[m[m/ [34mr0212B[m[m/ [34mr0217B[m[m/
###Markdown
Make the StimulusSet lookup meta
###Code
pwdb.connect(reuse_if_open=True)
pwdb.create_tables(models=[ImageModel, AttributeModel, ImageMetaModel, StimulusSetModel, ImageStoreModel, StimulusSetImageMap, ImageStoreMap])
gallant_v1_images, created = StimulusSetModel.get_or_create(name="gallant.David2004")
gallant_v1_image_store, created = ImageStoreModel.get_or_create(location_type="S3", store_type="zip",
location="https://mkgu-gallant-crcns.s3.amazonaws.com/gallant_crcns_v1_stimuli.zip")
eav_image_file_sha1 = AttributeModel(name="image_file_sha1", type="str")
eav_image_file_path_unique = AttributeModel(name="image_file_path_unique", type="str")
eav_image_file_sha1.save()
eav_image_file_path_unique.save()
for image in df_image_meta.itertuples():
pw_image = ImageModel(image_id=image.image_id)
pw_stimulus_set_image_map = StimulusSetImageMap(stimulus_set=gallant_v1_images, image=pw_image)
pw_image_image_store_map = ImageStoreMap(image=pw_image, image_store=gallant_v1_image_store,
path=image.relative_path)
pw_image.save()
pw_stimulus_set_image_map.save()
pw_image_image_store_map.save()
ImageMetaModel(image=pw_image, attribute=eav_image_file_sha1, value=str(image.image_id)).save()
ImageMetaModel(image=pw_image, attribute=eav_image_file_path_unique, value=str(image.relative_path)).save()
gallant_v1_stimulus_set = mkgu.get_stimulus_set("gallant.David2004")
gallant_v1_stimulus_set
###Output
_____no_output_____
###Markdown
Make the DataAssembly lookup meta
###Code
pwdb.create_tables(models=[AssemblyModel, AssemblyStoreMap, AssemblyStoreModel])
assy = AssemblyModel(name="gallant.David2004", assembly_class="NeuronRecordingAssembly",
stimulus_set=gallant_v1_images)
assy.save()
store = AssemblyStoreModel(assembly_type="netCDF",
location_type="S3",
location="https://mkgu-gallant-crcns.s3.amazonaws.com/gallant_v1_single_electrode.nc")
store.save()
assy_store_map = AssemblyStoreMap(assembly_model=assy, assembly_store_model=store, role="gallant.David2004")
assy_store_map.save()
gallant_v1 = mkgu.get_assembly("gallant.David2004")
gallant_v1
len(np.unique(gallant_v1["image_file_path_original"].values))
len(np.unique(gallant_v1["image_file_path_unique"].values))
###Output
_____no_output_____ |
nbs/00b_mltypes.ipynb | ###Markdown
Mltypes
###Code
# hide
%load_ext autoreload
%autoreload 2
#exporti
import warnings
import random
import uuid
import attr
from typing import List
# hide
import ipytest
import pytest
ipytest.autoconfig(raise_on_error=True)
###Output
_____no_output_____
###Markdown
Types
###Code
#export
@attr.define(slots=False)
class Coordinate:
x: int
y: int
@attr.define(slots=False)
class BboxCoordinate(Coordinate):
width: int
height: int
@attr.define(slots=False)
class BboxVideoCoordinate(BboxCoordinate):
id: str
def bbox_coord(self) -> BboxCoordinate:
return BboxCoordinate(*list(attr.asdict(self).values())[:4])
#hide
from attr import asdict
bbox_coord = BboxCoordinate(*[1, 2, 3, 4])
assert asdict(bbox_coord) == {'x': 1, 'y': 2, 'width': 3, 'height': 4}
video_coord = BboxVideoCoordinate(1, 1, 1, 1, '1')
assert asdict(video_coord) == {'x': 1, 'y': 1, 'width': 1, 'height': 1, 'id': '1'}
assert video_coord.bbox_coord() == BboxCoordinate(*[1, 1, 1, 1])
#export
# todo: use pydantic
class Input():
"""
Abstract class to represent input
"""
def __repr__(self):
return f"Annotator Input type: {self.__class__.__name__}"
class Output():
"""
    Abstract class to represent output
"""
def __repr__(self):
return f"Annotator Output type: {self.__class__.__name__}"
###Output
_____no_output_____
###Markdown
Image classes
###Code
#export
class InputImage(Input):
"""
image_dir: string
Directory of the image
image_width: int
Width of the image
image_height: int
Height of the image
fit_canvas: bool
        Ignores the given image size and fits the image to the canvas size
"""
def __init__(
self,
image_dir: str = 'pics',
image_width: int = 100,
image_height: int = 100,
fit_canvas: bool = False
):
self.width = image_width
self.height = image_height
self.dir = image_dir
self.fit_canvas = fit_canvas
if fit_canvas:
warnings.warn("Image size will be ignored since fit_canvas is activated")
%%ipytest
def test_it_warn_if_fit_canvas_is_activate_with_size():
with warnings.catch_warnings(record=True) as w:
inp_img = InputImage(image_width = 300, image_height = 300, fit_canvas=True)
assert bool(w) is True
%%ipytest
def test_it_doesnt_warn_if_fit_canvas_is_deactivate_with_size():
with warnings.catch_warnings(record=True) as w:
inp_img = InputImage(image_width = 300, image_height = 300, fit_canvas=False)
assert bool(w) is False
%%ipytest
def test_it_warn_if_fit_canvas_is_activate_with_size_none():
with warnings.catch_warnings(record=True) as w:
inp_img = InputImage(image_width=None, image_height=None, fit_canvas=True)
assert bool(w) is True
%%ipytest
def test_it_doesnt_warn_if_fit_canvas_is_deactivate_with_size_none():
with warnings.catch_warnings(record=True) as w:
inp_img = InputImage(image_width=None, image_height=None, fit_canvas=False)
assert bool(w) is False
# hide
imz = InputImage()
imz.dir
#export
class OutputImageLabel(Output):
"""
Configures the image output.
If no `label_dir` is specified, it generates randomized one.
"""
def __init__(self, label_dir=None, label_width=50, label_height=50):
self.width = label_width
self.height = label_height
if label_dir is None:
self.dir = 'class_autogenerated_' + ''.join(random.sample(str(uuid.uuid4()), 5))
else:
self.dir = label_dir
#export
class OutputLabel(Output):
def __init__(self, class_labels: List[str], label_width=50, label_height=50):
self.width = label_width
self.height = label_height
self.class_labels = class_labels
# hide
# label dir exists
lblz = OutputImageLabel(label_dir='class_images')
assert lblz.dir == 'class_images'
# hide
# no label dir, should generate randomized name
lblz = OutputImageLabel()
assert 'class_autogenerated_' in lblz.dir
#export
class OutputImageBbox(Output):
"""
classes: List[str]
Define the classes of objects available
to be classified
"""
def __init__(self, classes: List[str] = None):
self.classes = classes or []
self.drawing_enabled = True
#export
class OutputVideoBbox(OutputImageBbox):
"""
Specialization of the OutputImageBbox.
classes: List[str]
Define the classes of objects available
to be classified
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.drawing_enabled = True
self.drawing_trajectory_enabled = True
#export
class OutputGridBox(Output):
"""Configures the capture annotator"""
pass
#export
class NoOutput(Output):
"""Explore the data without worring
about which type of data it's wanted
to output"""
pass
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
_____no_output_____ |
AI_Class/032/Time-series.ipynb | ###Markdown
Time series data visualization function The plot_series() function takes arbitrary time values (time) and time series data (series) and draws them as a Matplotlib graph. The X and Y axis labels are set to 'Time' and 'Value', respectively, and a grid is shown over the data area.
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
plt.style.use('default')
plt.rcParams['figure.figsize'] = (6, 3)
plt.rcParams['font.size'] = 12
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Time series data with a trend The trend() function returns time series data that has a trend. Depending on the slope value, the series can have a positive or a negative trend over time. The code below creates time series data with a slope of 0.1 over a time span of length 4 * 365 + 1.
###Code
def trend(time, slope=0):
return slope * time
time = np.arange(4 * 365 + 1)
series = trend(time, slope=0.1)
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Time series data with seasonality The seasonal_pattern() function returns np.cos(season_time * 2 * np.pi) when the input season_time is less than 0.6, and 1 / np.exp(3 * season_time) otherwise. The seasonality() function returns time series data that repeats a specific pattern with the given period.
###Code
def seasonal_pattern(season_time):
return np.where(season_time < 0.6,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
amplitude = 40
series = seasonality(time, period=365, amplitude=amplitude)
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Using the trend() and seasonality() functions, we created time series data that has both a trend and seasonality.
###Code
baseline = 10
slope = 0.05
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Time series data with noise The white_noise() function returns time series data whose values are random real numbers between 0 and noise_level.
###Code
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.rand(len(time)) * noise_level
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
plot_series(time, noise)
plt.show()
###Output
_____no_output_____
###Markdown
This time, we created time series data that has a trend, seasonality, and noise.
###Code
baseline = 10
slope = 0.05
noise_level = 5
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) \
+ white_noise(time, noise_level, seed=42)
plot_series(time, series)
plt.show()
###Output
_____no_output_____
###Markdown
Time series data with autocorrelation The autocorrelation() function returns time series data that has autocorrelation. ar is random data drawn from a normal distribution. At each step, 0.8 times the previous value is added, and the series multiplied by amplitude is returned.
###Code
split_time = 1000
time_train, x_train = time[:split_time], series[:split_time]
time_valid, x_valid = time[split_time:], series[split_time:]
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
pi = 0.8
ar = rnd.randn(len(time) + 1)
for step in range(1, len(time) + 1):
        ar[step] += pi * ar[step - 1] ## add 0.8 times the previous value
return ar[1:] * amplitude
series = autocorrelation(time, 10, seed=42)
plot_series(time[:200], series[:200])
plt.show()
###Output
_____no_output_____
###Markdown
We created time series data that has autocorrelation and a trend.
###Code
series = autocorrelation(time, 10, seed=42) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
###Output
_____no_output_____
###Markdown
This time, we created time series data that has autocorrelation and a trend, together with seasonality.
###Code
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
###Output
_____no_output_____
###Markdown
This time, let's create time series data whose characteristics change after a certain point. After the 2/3 point, the amplitude, the period, and the slope all change. The series also has noise over the entire range.
###Code
series = autocorrelation(time, 10, seed=42) + seasonality(time, period=50, amplitude=150) + trend(time, 2)
series2 = autocorrelation(time, 5, seed=42) + seasonality(time, period=50, amplitude=2) + trend(time, -1) + 550
series[200:] = series2[200:] # autocorrelation amp 10->5, seasonality amp 150->2, trend slope 2->-1, +550
series += white_noise(time, 30)
plot_series(time[:300], series[:300])
plt.show()
###Output
_____no_output_____ |
documentation/source/usersGuide/usersGuide_31_clefs.ipynb | ###Markdown
User's Guide, Chapter 31: Clefs, Ties, and Beams Throughout the first thirty chapters, we have repeatedly been using fundamental music notation principles, such as clefs, ties, and beams, but we have never talked about them directly. This chapter gives us a chance to do so and to look at some `Stream` methods that make use of them. Let's first look at clefs. They all live in the :ref:`moduleClef` module:
###Code
alto = clef.AltoClef()
m = stream.Measure([alto])
m.show()
###Output
_____no_output_____
###Markdown
Since clefs can be put into Streams, they are Music21Objects, with offsets, etc., but they generally have a Duration of zero.
###Code
alto.offset
alto.duration
###Output
_____no_output_____
###Markdown
Multiple clefs can coexist in the same measure, and will all display (so long as there's at least one note between them; a problem of our MusicXML readers):
###Code
m.append(note.Note('C4'))
bass = clef.BassClef()
m.append(bass)
m.append(note.Note('C4'))
m.show()
###Output
_____no_output_____
###Markdown
Most of the clefs in common use are `PitchClefs` and they know what line they are on:
###Code
alto.line
tenor = clef.TenorClef()
tenor.show()
tenor.line
###Output
_____no_output_____
###Markdown
In this case, the line refers to the staff line that its "sign" can be found on.
###Code
tenor.sign
treble = clef.TrebleClef()
treble.sign
###Output
_____no_output_____
###Markdown
Clefs also have an `.octaveChange` value which specifies how many octaves "off" from the basic clef they are.
###Code
treble.octaveChange
t8vb = clef.Treble8vbClef()
t8vb.octaveChange
t8vb.show()
###Output
_____no_output_____
###Markdown
There are some clefs that do not support Pitches, such as NoClef:
###Code
noClef = clef.NoClef()
###Output
_____no_output_____
###Markdown
This clef is not supported in MuseScore (which I use to generate these docs), but some other MusicXML readers will render a score without a clef.
###Code
clef.PercussionClef().show()
###Output
_____no_output_____
###Markdown
There are a lot of clefs that are pre-defined in `music21` including unusual ones such as `MezzoSopranoClef`, `SubBassClef`, and `JianpuClef`. The :ref:`moduleClef` module lists them all. But you can also create your own clef.
###Code
pc = clef.PitchClef()
pc.sign = 'F'
pc.line = 4
pc.octaveChange = -1
pc.show()
###Output
_____no_output_____
###Markdown
And you can get a clef from a string by using the :func:`~music21.clef.clefFromString` function:
###Code
clef.clefFromString('treble')
###Output
_____no_output_____
###Markdown
Or from a sign and a number of the line:
###Code
c = clef.clefFromString('C4')
c.show()
###Output
_____no_output_____
###Markdown
Note, be very careful not to name your variable `clef` or you will lose access to the `clef` module! Automatic Clef Generation Look at this quick Stream:
###Code
n = note.Note('B2')
s = stream.Stream([n])
s.show()
###Output
_____no_output_____
###Markdown
How did `music21` know to make the clef be bass clef? It turns out that there's a function in `clef` called :func:`~music21.clef.bestClef` which can return the best clef given the contents of the stream:
###Code
clef.bestClef(s)
s.append(note.Note('C6'))
clef.bestClef(s)
s.show()
###Output
_____no_output_____
###Markdown
`bestClef` has two configurable options. `allowTreble8vb`, if set to True, gives the very useful `Treble8vb` clef:
###Code
n = note.Note('B3')
s = stream.Stream([n])
clef.bestClef(s, allowTreble8vb=True)
###Output
_____no_output_____
###Markdown
And it also has a `recurse` parameter, which should be set to True when running on a nested stream structure, such as a part:
###Code
bass = corpus.parse('bwv66.6').parts['bass']
clef.bestClef(bass)
clef.bestClef(bass, recurse=True)
###Output
_____no_output_____
###Markdown
Ties That's enough about clefs; let's move to a similarly basic musical element called "Ties". Ties connect two pitches at the same pitch level attached to different notes or chords. All notes have a `.tie` attribute that specifies where the tie lives. Let's look at the top voice of an Agnus Dei by Palestrina:
###Code
agnus = corpus.parse('palestrina/Agnus_01')
agnusSop = agnus.parts[0]
agnusSop.measures(1, 7).show()
###Output
_____no_output_____
###Markdown
The second note of the first measure is tied, so let's find it:
###Code
n1 = agnusSop.recurse().notes[1]
n1
###Output
_____no_output_____
###Markdown
Now let's look at the `.tie` attribute:
###Code
n1.tie
###Output
_____no_output_____
###Markdown
This tie says "start". I'll bet that if we get the next note, we'll find it has a Tie marked "stop":
###Code
n1.next('Note').tie
###Output
_____no_output_____
###Markdown
The second `.tie` does not produce a graphical object. Thus the Tie object really represents a tied-state for a given note rather than the notational "tie" itself. The previous `Note` though, has a `.tie` of None
###Code
print(n1.previous('Note').tie)
###Output
None
###Markdown
We can find the value of 'start' or 'stop' in the `.type` attribute of the :class:`~music21.tie.Tie`.
###Code
n1.tie.type
n1.next('Note').tie.type
###Output
_____no_output_____
###Markdown
There is a third tie type, 'continue', used if the note is tied from before and also tied to the next note. We'll demonstrate it by creating some notes and ties manually:
###Code
c0 = note.Note('C4')
c0.tie = tie.Tie('start')
c1 = note.Note('C4')
c1.tie = tie.Tie('continue')
c2 = note.Note('C4')
c2.tie = tie.Tie('stop')
s = stream.Measure()
s.append([c0, c1, c2])
s.show()
###Output
_____no_output_____
###Markdown
(Note that if you've worked with MusicXML, our 'continue' value is similar to the notion in MusicXML of attaching two ties, both a 'stop' and a 'start' tie.) Ties also have a `.placement` attribute which can be 'above', 'below', or None, the last of which lets renderers determine the position from context:
###Code
c0.tie.placement = 'above'
s.show()
###Output
_____no_output_____
###Markdown
Setting the placement on a 'stop' tie does nothing. Ties also have a style attribute that represents how the tie should be displayed. It can be one of 'normal', 'dotted', 'dashed', or 'hidden'.
###Code
s = stream.Stream()
for tie_style in ('normal', 'dotted', 'dashed', 'hidden'):
nStart = note.Note('E4')
tStart = tie.Tie('start')
tStart.style = tie_style
nStart.tie = tStart
nStop = note.Note('E4')
tStop = tie.Tie('stop')
tStop.style = tie_style # optional
nStop.tie = tStop
s.append([nStart, nStop])
s.show()
###Output
_____no_output_____
###Markdown
It can be hard to tell the difference between 'dotted' and 'dashed' in some notation programs. Ties and chords Chords also have a `.tie` attribute:
###Code
ch0 = chord.Chord('C4 G4 B4')
ch0.tie = tie.Tie('start')
ch1 = chord.Chord('C4 G4 B4')
ch1.tie = tie.Tie('stop')
s = stream.Measure()
s.append([ch0, ch1])
s.show()
###Output
_____no_output_____
###Markdown
This is great and simple if you have two chords that are identical, but what if there are two chords where some notes should be tied and some should not be, such as:
###Code
ch2 = chord.Chord('D4 G4 A4')
ch3 = chord.Chord('D4 F#4 A4')
s = stream.Measure()
s.append([ch2, ch3])
s.show()
###Output
_____no_output_____
###Markdown
The D and the A might want to be tied, but the suspended G needs to resolve to the F without having a tie in it. The solution obviously relies on assigning a :class:`~music21.tie.Tie` object to a `.tie` attribute somewhere, but this is not the right approach:
###Code
p0 = ch2.pitches[0]
p0
p0.tie = tie.Tie('start') # Don't do this.
###Output
_____no_output_____
###Markdown
Pitch objects generally do not have `.tie` attributes, and while we can assign an attribute to almost any object, `music21` looks for the `.tie` attribute on Notes or Chords, not Pitches. So to do this properly, we need to know that internally, Chords store not just Pitch objects, but also Note objects, which you can access by iterating over the Chord:
###Code
for n in ch2:
print(n)
###Output
<music21.note.Note D>
<music21.note.Note G>
<music21.note.Note A>
###Markdown
Aha, so this is a trick. We could say:
###Code
ch2[0]
ch2[0].tie = tie.Tie('start')
###Output
_____no_output_____
###Markdown
And that works rather well. But maybe you don't want to bother remembering which note number in a chord refers to the particular note you want tied? You can also get Notes out of a chord by passing in the pitch name of the Note to the chord:
###Code
ch2['A4']
###Output
_____no_output_____
###Markdown
Note that this only works properly if the chord does not have any repeated pitches. We are safe here. We can also retrieve and specify information directly in the chord from the index:
###Code
ch2['D4.tie']
###Output
_____no_output_____
###Markdown
Or alternatively (though note that this is a string):
###Code
ch2['0.tie']
###Output
_____no_output_____
###Markdown
And we can set the information this way too:
###Code
ch2['A4.tie'] = tie.Tie('start')
###Output
_____no_output_____
###Markdown
Now let's set the stop information on the next chord:
###Code
ch3['D4.tie'] = tie.Tie('stop')
ch3['A4.tie'] = tie.Tie('stop')
s.show()
###Output
_____no_output_____
###Markdown
Voila! it works well. Now what does `ch2.tie` return?
###Code
ch2.tie
###Output
_____no_output_____
###Markdown
The chord returns information from the lowest note that is tied. So if we delete the tie on D4, we get the same answer:
###Code
ch2['D4.tie'] = None
ch2.tie
###Output
_____no_output_____
###Markdown
But if we delete it from A4, we get a different answer:
###Code
ch2['A4'].tie = None
ch2.tie is None
###Output
_____no_output_____
###Markdown
Here is an example of a case where we might want to set the `.placement` attribute of a tie manually:
###Code
c1 = chord.Chord('C#4 E4 G4')
c2 = chord.Chord('C4 E4 G4')
c1[1].tie = tie.Tie('start')
c2[1].tie = tie.Tie('stop')
c1[2].tie = tie.Tie('start')
c2[2].tie = tie.Tie('stop')
s = stream.Stream()
s.append([c1, c2])
s.show()
###Output
_____no_output_____
###Markdown
Hmm... the E tie intersects with the accidental and looks too confusing with a tie on the C to C. However, there's a placement attribute beginning in music21 v.4 which can fix this:
###Code
c1[1].tie.placement = 'above'
s.show()
###Output
_____no_output_____
###Markdown
Making and Stripping Ties from a Stream Sometimes ties get in the way of analysis. For instance, take this simple melody created in TinyNotation:
###Code
littlePiece = converter.parse('tinyNotation: 2/4 d4. e8~ e4 d4~ d8 f4.')
littlePiece.show()
###Output
_____no_output_____
###Markdown
Suppose we wanted to know how many D's are in this melody. This, unfortunately, isn't the right approach:
###Code
numDs = 0
for n in littlePiece.recurse().notes:
if n.pitch.name == 'D':
numDs += 1
numDs
###Output
_____no_output_____
###Markdown
The first D is found properly, but the second D, being spanned across a barline, is counted twice. It is possible to get the right number with some code like this:
###Code
numDs = 0
for n in littlePiece.recurse().notes:
if (n.pitch.name == 'D'
and (n.tie is None
or n.tie.type == 'start')):
numDs += 1
numDs
###Output
_____no_output_____
###Markdown
But this code will get very tedious if you also want to do something more complex, say based on the total duration of all the D's, so it would be better if the Stream had no tied notes in it.To take a Stream with tied notes and change it into a Stream with tied notes represented by a single note, call :meth:`~music21.stream.Stream.stripTies` on the Stream:
###Code
c = chord.Chord('C#4 E4 G4')
c.tie = tie.Tie('start')
c2 = chord.Chord('C#4 E4 G4')
c2.tie = tie.Tie('stop')
s = stream.Measure()
s.append([c, c2])
s.show()
s2 = s.stripTies()
s2.show()
###Output
_____no_output_____
###Markdown
So, getting back to our little piece, all of its notes are essentially dotted quarter notes, but some of them are tied across the barline. To fix this, let's get a score where the ties are stripped, but we'll retain the measures.
###Code
littleStripped = littlePiece.stripTies()
###Output
_____no_output_____
###Markdown
Now we'll count the D's again:
###Code
numDs = 0
for n in littleStripped.recurse().notes:
if n.pitch.name == 'D':
numDs += 1
numDs
###Output
_____no_output_____
###Markdown
That's a lot better. Let's look at `littleStripped` in a bit more detail, by showing it as a text output with end times of each object added:
###Code
littleStripped.show('text', addEndTimes=True)
###Output
{0.0 - 3.0} <music21.stream.Measure 1 offset=0.0>
{0.0 - 0.0} <music21.clef.TrebleClef>
{0.0 - 0.0} <music21.meter.TimeSignature 2/4>
{0.0 - 1.5} <music21.note.Note D>
{1.5 - 3.0} <music21.note.Note E>
{2.0 - 4.5} <music21.stream.Measure 2 offset=2.0>
{1.0 - 2.5} <music21.note.Note D>
{4.0 - 6.0} <music21.stream.Measure 3 offset=4.0>
{0.5 - 2.0} <music21.note.Note F>
{2.0 - 2.0} <music21.bar.Barline type=final>
###Markdown
One thing to notice is that the note E now extends beyond the end of the first 2/4 measure. The second D, in measure 2, by contrast, does not begin at the beginning of the measure, but instead halfway through the first measure. This is why it's sometimes most helpful to follow `stripTies()` with a `.flatten()`:
###Code
stripped2 = littlePiece.stripTies().flatten()
stripped2.show('text', addEndTimes=True)
###Output
{0.0 - 0.0} <music21.clef.TrebleClef>
{0.0 - 0.0} <music21.meter.TimeSignature 2/4>
{0.0 - 1.5} <music21.note.Note D>
{1.5 - 3.0} <music21.note.Note E>
{3.0 - 4.5} <music21.note.Note D>
{4.5 - 6.0} <music21.note.Note F>
{6.0 - 6.0} <music21.bar.Barline type=final>
###Markdown
In this view, it's easier to see what is going on with the lengths of the various notes and where they should begin.Remember from :ref:`Chapter 17`, that if we want to go from the strip-tie note to the original, we can use `.derivation`. For instance, let's put an accent mark on every other note of the original score, not counting tied notes:
###Code
for i, n in enumerate(stripped2.notes):
if i % 2 == 1:
nOrigin = n.derivation.origin
nOrigin.articulations.append(articulations.Accent())
littlePiece.show()
###Output
_____no_output_____
###Markdown
To undo the effect of `.stripTies`, run `.makeTies`. For instance, let's take `littleStripped` and change all the D's to C's and then get a new part:
###Code
for n in littleStripped.recurse().notes:
if n.pitch.name == 'D':
n.pitch.name = 'C'
unstripped = littleStripped.makeTies()
unstripped.show()
###Output
_____no_output_____
###Markdown
Actually, one thing you can count on is that `music21` will run `.makeTies` before showing a piece (since otherwise it can't be displayed in MusicXML) so if all you are going to do is show a piece, go ahead and skip the `.makeTies` call:
###Code
littleStripped.show()
###Output
_____no_output_____
###Markdown
Beams Beams are the little invention of the seventeenth century (replacing the earlier "ligatures") that make it easier to read groups of eighth, sixteenth, and smaller notes by grouping them together. Formerly not used in vocal music (except in melismas), today beams are used in nearly all contexts, so of course `music21` supports them. There are two objects that deal with beams: the :class:`~music21.beam.Beam` object, which represents a single horizontal line, and the :class:`~music21.beam.Beams` object (with "s" at the end), which deals with collections of `Beam` objects. Both live in the module called :ref:`moduleBeam`. Let's create a measure with some nice notes in it:
###Code
m = stream.Measure()
c = note.Note('C4', type='quarter')
m.append(c)
d1 = note.Note('D4', type='eighth')
d2 = note.Note('D4', type='eighth')
m.append([d1, d2])
e = note.Note('E4', type='16th')
m.repeatAppend(e, 4)
m.show('text')
###Output
{0.0} <music21.note.Note C>
{1.0} <music21.note.Note D>
{1.5} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.25} <music21.note.Note E>
{2.5} <music21.note.Note E>
{2.75} <music21.note.Note E>
###Markdown
Every note and chord has a `.beams` attribute which returns a `Beams` object.
###Code
c.beams
###Output
_____no_output_____
###Markdown
That there is nothing after "`music21.beam.Beams`" shows that there are no `Beam` objects inside it. Since `c` is a quarter note, it doesn't make much sense to add a Beam to it, but `d1` and `d2`, being eighth notes, should probably be beamed. So we will create a `Beam` object for `d1` and give it the `.type` of "start" since it is the start of a beam, and the number of "1" since it is the first beam:
###Code
beam1 = beam.Beam(type='start', number=1)
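# number=1 refers to the first beam level (the eighth-note beam); number=2 would be the sixteenth-note beam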
###Output
_____no_output_____
###Markdown
Now we can add it to the `Beams` object in `d1`:
###Code
d1Beams = d1.beams
d1Beams.append(beam1)
d1.beams
###Output
_____no_output_____
###Markdown
Now we can see that there is a start beam on `d1.beams`. This way of constructing `Beam` objects individually can get tedious for the programmer, so for `d2` we'll make the stop beam in an easier manner, using the same `Beams.append` method, but just giving it the "stop" attribute:
###Code
d2.beams.append('stop')
d2.beams
###Output
_____no_output_____
###Markdown
Now when we show the score we'll see it with some beams:
###Code
m.show()
###Output
_____no_output_____
###Markdown
Now let us add beams to the sixteenth notes. There's an even easier way to add multiple beams than calling append repeatedly: we can simply get the notes and call `.beams.fill()` with the number of beams we want (2) and their type, which will be "start", twice "continue", and once "stop":
###Code
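# fill(2, type) creates two beam levels at once (the eighth- and sixteenth-note beams), each with the given type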
m.notes[3].beams.fill(2, 'start')
m.notes[4].beams.fill(2, 'continue')
m.notes[5].beams.fill(2, 'continue')
m.notes[6].beams.fill(2, 'stop')
m.show()
###Output
_____no_output_____
###Markdown
Suppose we wanted to put a secondary beam break in the middle of the sixteenth notes? It involves changing the second beam (beam number 2) on `notes[4]` and `notes[5]`. We do not want to change beam number 1, because it continues across the four notes:
###Code
m.notes[4].beams.setByNumber(2, 'stop')
m.notes[5].beams.setByNumber(2, 'start')
###Output
_____no_output_____
###Markdown
The output is not rendered in MuseScore, but works great in Finale 25: There are cases, such as dotted eighths followed by sixteenths, where partial beams are needed; these partial beams need to know their direction. For instance:
###Code
m2 = stream.Measure()
m2.append(meter.TimeSignature('6/8'))
c4 = note.Note('C4')
d8 = note.Note('D4', type='eighth')
e8 = note.Note('E4', type='eighth')
e8.beams.append('start')
f16 = note.Note('F4', type='16th')
f16.beams.append('continue')
###Output
_____no_output_____
###Markdown
Now comes the second, partial beam, which we'll make go right:
###Code
f16.beams.append('partial', 'right')
g8 = note.Note('G4', type='eighth')
g8.beams.append('continue')
a16 = note.Note('A4', type='16th')
a16.beams.append('stop')
a16.beams.append('partial', 'left')
m2.append([c4, d8, e8, f16, g8, a16])
m2.show()
###Output
_____no_output_____
###Markdown
This beaming implies that the dotted quarter is divided into three eighth notes. If we wanted the beams to imply that the dotted quarter was divided into two dotted-eighth notes, we could switch the partial beam on `f16` to point to the left. Unfortunately, none of the major MusicXML readers properly import direction of partial beams ('backward hook' vs. 'forward hook'). Beams the easy way This section began by explaining what beams were like on the lowest possible level, but most of the time we're going to be too busy solving the world's great musicological/music theoretical/cognition/composition problems to worry about things like beaming! So let's jump all the way to the other extreme, and look at beams in the easiest possible way. If all you want is your score to look decently beamed when you show it, forget about setting beaming at all and just show it!
###Code
m = stream.Measure()
ts34 = meter.TimeSignature('3/4')
m.append(ts34)
c = note.Note('C4', type='quarter')
m.append(c)
d1 = note.Note('D4', type='eighth')
d2 = note.Note('D4', type='eighth')
m.append([d1, d2])
e = note.Note('E4', type='16th')
m.repeatAppend(e, 4)
m.show()
###Output
_____no_output_____
###Markdown
If the TimeSignature changes, `music21` will rebeam it differently:
###Code
ts68 = meter.TimeSignature('6/8')
m.replace(ts34, ts68)
m.show()
###Output
_____no_output_____
###Markdown
This is accomplished because before showing the Stream, `music21` runs the powerful method :meth:`~music21.stream.base.Stream.makeNotation` on the stream. This calls a function in :ref:`moduleStreamMakeNotation` module called :func:`~music21.stream.makeNotation.makeBeams` that does the real work. That function checks the stream to see if any beams exist on it:
###Code
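# reports whether any notes in this stream already carry Beam objects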
m.streamStatus.haveBeamsBeenMade()
###Output
_____no_output_____
###Markdown
If there are any beams in the stream, then that will return `True` and no beams will be made:
###Code
m.notes[-2].beams.fill(2, 'start')
m.notes[-1].beams.fill(2, 'stop')
m.streamStatus.haveBeamsBeenMade()
m.show()
###Output
_____no_output_____ |
Xpresso/kipoi_example.ipynb | ###Markdown
Source Model
###Code
import kipoi

# Source model directly from directory
model = kipoi.get_model("../Xpresso_kipoi/human_median", source="dir")
###Output
Using downloaded and verified file: /home/vagar/Xpresso_kipoi/downloaded/model_files/human_median/weights/9d00a3bc614da81655328b6e278569e2
###Markdown
Download and prepare example files (optional)
###Code
import os
import urllib.request
import gzip
import shutil
import pyranges as pr
# make ExampleFile directory if it does not exist
if not os.path.exists("ExampleFiles"):
os.makedirs("ExampleFiles")
# Download GTF
urllib.request.urlretrieve("https://zenodo.org/record/1466102/files/example_files-gencode.v24.annotation_chr22.gtf?download=1", 'ExampleFiles/chrom22.gtf')
# Download fasta
urllib.request.urlretrieve("https://zenodo.org/record/1466102/files/example_files-hg38_chr22.fa?download=1", 'ExampleFiles/chrom22.fa')
# Extract implied TSS sites from gtf
# Read in with pyranges
gr = pr.read_gtf('ExampleFiles/chrom22.gtf')
# Extract protein coding genes
prot_genes = gr.df[(gr.df.Feature == 'gene') & (gr.df.gene_type == 'protein_coding')]
# Compute implied TSS
prot_genes['TSS'] = (prot_genes.Start * (prot_genes.Strand == "+")) + (prot_genes.End * (prot_genes.Strand == "-"))
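# (for '+'-strand genes the TSS is the interval start; for '-'-strand genes it is the interval end)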
# Determine region around TSS
prot_genes['region_start'] = prot_genes.TSS + (-7000*(prot_genes.Strand == "+")) + (-3500 * (prot_genes.Strand == "-"))
prot_genes['region_end'] = prot_genes.TSS + (3500*(prot_genes.Strand == "+")) + (7000 * (prot_genes.Strand == "-"))
# Add nuisance column to make bed6
prot_genes["score"] = "."
# write bed file
bed = prot_genes[['Chromosome', 'region_start', 'region_end', 'gene_id', 'score', 'Strand']]
bed.to_csv("ExampleFiles/chrom22.bed", sep='\t', header=False, index=False)
###Output
_____no_output_____
###Markdown
Provide the Parameters
###Code
# Path of the fasta file
fasta_path = "ExampleFiles/chrom22.fa"
# Set false if fasta has a chr prefix, true otherwise
num_chr = False
# Path of the bed file specifying the promoter regions
bed_path = "ExampleFiles/chrom22.bed"
# output file path
output_file_path = "predictions.tsv"
###Output
_____no_output_____
###Markdown
Run Prediction
###Code
model.pipeline.predict_to_file(output_file_path, {"intervals_file":bed_path,
"fasta_file":fasta_path,
"num_chr_fasta":num_chr},
batch_size=64)
###Output
100%|██████████| 7/7 [00:06<00:00, 1.20it/s]
###Markdown
Load results
###Code
import pandas as pd

# Load data as dataframe
df = pd.read_csv(output_file_path, sep="\t")
df
# Merge back with gene_ids
df = df.rename(columns={"metadata/ranges/chr":"Chromosome", "metadata/ranges/start":"region_start", "metadata/ranges/end":"region_end", "metadata/ranges/strand":"strand"})
merged = prot_genes.merge(df, on=["Chromosome", "region_start", "region_end"])
merged
###Output
_____no_output_____ |
module4-roc-auc/gh_224_roc_auc.ipynb | ###Markdown
Assignment***Submit any final predictions to our Kaggle challenge!***If your Kaggle Public Leaderboard score is:- **Nonexistent**: You need to work on your model and submit predictions- **< 70%**: You should work on your model and submit predictions- **70% < score < 80%**: You may want to work on visualizations and write a blog post- **> 80%**: You should work on visualizations and write a blog postExplore the class_weight demo in this notebook.Read these articles (if you haven't already) - [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/) - [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/) Stretch GoalsExplore these links!- [imbalance-learn](https://github.com/scikit-learn-contrib/imbalanced-learn)- [Machine Learning Meets Economics](http://blog.mldb.ai/blog/posts/2016/01/ml-meets-economics/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/) Use the class_weight parameter in scikit-learn What you can do about imbalanced classes[Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/) gives "a rough outline of useful approaches" : - Do nothing. Sometimes you get lucky and nothing needs to be done. You can train on the so-called natural (or stratified) distribution and sometimes it works without need for modification.- Balance the training set in some way: - Oversample the minority class. - Undersample the majority class. - Synthesize new minority classes.- Throw away minority examples and switch to an anomaly detection framework.- At the algorithm level, or after it: - Adjust the class weight (misclassification costs). - Adjust the decision threshold. - Modify an existing algorithm to be more sensitive to rare classes.- Construct an entirely new algorithm to perform well on imbalanced data. We demonstrated two of these options: - "Adjust the class weight (misclassification costs)" — many scikit-learn classifiers have a `class_weight` parameter- "Adjust the decision threshold" — you can learn more about this in a great blog post, [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415). Another option to be aware of:- The [imbalance-learn](https://github.com/scikit-learn-contrib/imbalanced-learn) library can be used to "oversample the minority class, undersample the majority class, or synthesize new minority classes." Here's a fun demo you can explore! The next code cells do five things: 1. Generate dataWe use scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function to generate fake data for a binary classification problem, based on several parameters, including:- Number of samples- Weights, meaning "the proportions of samples assigned to each class."- Class separation: "Larger values spread out the clusters/classes and make the classification task easier."(We are generating fake data so it is easy to visualize.) 2.
Split dataWe split the data three ways, into train, validation, and test sets. (For this toy example, it's not really necessary to do a three-way split. A two-way split, or even no split, would be ok. But I'm trying to demonstrate good habits, even in toy examples, to avoid confusion.) 3. Fit modelWe use scikit-learn to fit a [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) on the training data.We use this model parameter:> **class_weight : _dict or ‘balanced’, default: None_**> Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.> The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. 4. Evaluate modelWe use our Logistic Regression model, which was fit on the training data, to generate predictions for the validation data.Then we print [scikit-learn's Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report), with many metrics, and also the accuracy score. We are comparing the correct labels to the Logistic Regression's predicted labels, for the validation set. 5. Visualize decision functionBased on these examples- https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html- http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/example-1-decision-regions-in-2d
###Code
!pip install category_encoders
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import percentileofscore
import seaborn as sns
from tqdm import tnrange
from IPython.display import display
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
#1. Generate data
# Try re-running the cell with different values for these parameters
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
X, y = make_classification(n_samples=n_samples, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2,
n_clusters_per_class=1, weights=weights,
class_sep=class_sep, random_state=0)
# 2. Split data
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1, random_state=1)
# 3. Fit model
# Try re-running the cell with different values for this parameter
class_weight = None
model = LogisticRegression(solver='lbfgs', class_weight=class_weight)
model.fit(X_train, y_train)
# 4. Evaluate model
y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))
plot_confusion_matrix(y_val, y_pred)
# 5. Visualize decision regions
plt.figure(figsize=(10, 6))
plot_decision_regions(X_val, y_val, model, legend=0);
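# ------------------------------------------------------------------
# Added sketch (not part of the original demo, and not called here): the two
# options discussed above, i.e. re-fitting with class_weight='balanced' and
# predicting with a custom decision threshold instead of the default 0.5.
# The function name and threshold value are illustrative only.
def fit_balanced_and_threshold(X_train, y_train, X_val, threshold=0.3):
    balanced_model = LogisticRegression(solver='lbfgs', class_weight='balanced')
    balanced_model.fit(X_train, y_train)
    # predict_proba gives P(class=1); lowering the threshold favors the minority class
    proba = balanced_model.predict_proba(X_val)[:, 1]
    return (proba >= threshold).astype(int)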
###Output
precision recall f1-score support
0 0.98 1.00 0.99 96
1 1.00 0.50 0.67 4
accuracy 0.98 100
macro avg 0.99 0.75 0.83 100
weighted avg 0.98 0.98 0.98 100
|
src/exploratory_analysis.ipynb | ###Markdown
* Eliminate rows using the 'abstract_length' column * Clean abstracts using Beautiful Soup * Clean advisor names and committee member names
###Code
count = 0
for idx, row in thesis_df.iterrows():
if type(row['committee_chair'])==float and type(row['committee_members'])==float: count += 1
print(count)
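# (the float checks above count rows where both committee fields are NaN)

# Sketch of the cleaning steps listed above -- not executed here; the column names
# 'abstract' and 'abstract_length' are assumptions about thesis_df.
def clean_thesis_df(df, min_abstract_length=1):
    # drop rows whose abstracts are too short, then strip any leftover HTML markup
    from bs4 import BeautifulSoup
    df = df[df['abstract_length'] >= min_abstract_length].copy()
    df['abstract'] = df['abstract'].map(lambda a: BeautifulSoup(str(a), 'html.parser').get_text())
    return df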
###Output
3
|
Supporting Information/.ipynb_checkpoints/F-Test - Nonlinear Sturm-Liouville-checkpoint.ipynb | ###Markdown
Linear Sturm-Liouville Problem Overview:For details on problem formulation, visit the data folder and view the dataset for System 1.Noise: None (0% $\sigma$)Known Operator? No. Learning GoalsKnowns: $f_j(x)$ forcing functions and observed responses $u_j(x)$ Unknowns: Operator $L$, parametric coefficients $p(x)$ and $q(x)$----------------Input: Observations of $u_j(x)$ and the corresponding forcings $f_j(x)$Output: Operator $L$, including parametric coefficients $p(x)$ and $q(x)$
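A sketch of the governing equation assumed here (inferred from the coefficient plots at the end of this notebook, not stated explicitly in the original text): $-\frac{d}{dx}\!\left(p(x)\,\frac{du_j}{dx}\right) + q(x)\,u_j + \alpha\,q(x)\,u_j^{2} = f_j(x)$, which expands to $-p\,u_j'' - p'\,u_j' + q\,u_j + \alpha\,q\,u_j^{2} = f_j$; with $\alpha = 0$ this reduces to the classical linear Sturm-Liouville operator.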
###Code
%load_ext autoreload
%autoreload 2
# Import Python packages
import pickle
# Third-Party Imports
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter
from sklearn.model_selection import train_test_split
# Package Imports
from tools.variables import DependentVariable, IndependentVariable
from tools.term_builder import TermBuilder, build_datapools, NoiseMaker
from tools.differentiator import Differentiator, FiniteDiff
from tools.regressions import *
from tools.misc import report_learning_results
from tools.plotter3 import Plotter, compute_coefficients
from tools.Grouper import PointwiseGrouper
from tools.GroupRegressor import GroupRegressor
np.random.seed(seed=1)
%%time
## STEP 1. Collect the data
file_stem = "./data/S2-NLSL-a0.8-ijk3-q2-"
x_array = pickle.load(open(file_stem +"x.pickle", "rb"))
ode_sols = pickle.load(open(file_stem +"sols.pickle", "rb"))
forcings = pickle.load(open(file_stem + "fs.pickle", "rb"))
sl_coeffs = pickle.load(open(file_stem + "coeffs.pickle", "rb"))
len(ode_sols)
%%time
## STEP 2. BUILD DATAPOOLS
# Datapools are a table of numerically evaluated terms, which will be used to create $\Theta$ and $\U$ and learn $\Xi$
# Set LHS term
lhs_term = 'f'
# Get a random number for shuffling the train and test data
seed = np.random.randint(1000)
print("Random seed:", seed)
# Split up the data into test and train:
sol_train, sol_test, f_train, f_test = train_test_split(ode_sols, forcings, train_size=100, random_state = seed)
# Configure a differentiator (finite differences) to numerically differentiate data
diff_options = Differentiator(diff_order = 2, diff_method = 'FD')
nm = None
# Configure a differentiator (local polynomial fit) to numerically differentiate the noisy data
diff_options = Differentiator(diff_order = 2, diff_method = 'poly', cheb_width = 10, cheb_degree = 6)
nm = NoiseMaker(noise_magnitude = 0.01, gaussian_filter_sigma = 10)
# works with 1% noise, 10 sigma, 7 width, 5 degree
# Generate the datapools from the ODE solutions
train_dps = build_datapools(sol_train, diff_options, lhs_term, f_train, noise_maker = nm)
#test_dps = build_datapools(sol_test, diff_options, lhs_term, f_test, noise_maker = nm)
%%time
## STEP 3. ORGANIZE AND FORMAT DATA FOR LEARNING
## Generate training and test data sets
# Define Grouper object
grouper = PointwiseGrouper(lhs_term = lhs_term)
# Define the regression function as a lambda function which only expects lists of Thetas, LHSs as inputs
RegFunc = lambda Thetas, LHSs: TrainSGTRidge3(Thetas, LHSs, num_tols = 100, lam = 1e-5, epsilon = 1e-6, normalize = 2)
# Create the group regressor
groupreg = GroupRegressor(RegFunc, grouper, train_dps, 'x')
%%time
# Regress coefficients
groupreg.group_regression()#known_vars=['du/dx', 'u', 'f', 'u^{2}', 'd^{2}u/dx^{2}'])
# Report the learned coefficients
groupreg.report_learning_results(5)
from tools.plotter3 import Plotter
### PLOT RESULTS
# Define the total observations string
text_str = r'Total Observations={}'.format(len(sol_train))
# Add the magnitude of the noise
try:
text_str += "\n{}% noise".format(nm.noise_mag*100)
except:
text_str += "\n0% noise"
# Define linecolors and markers to be used in analysis plots
lcolors = ['mediumseagreen', 'mediumaquamarine', 'lightseagreen', 'darkcyan', 'cadetblue', 'magenta']
lcolors = ['#43a2ca', '#a8ddb5', '#e0f3db']
lcolors = ['red', 'orangered', 'saddlebrown', 'darkorange']
lcolors = ['#7f000f', '#d7301f', '#fc8d59', 'red']
markers = ['s', 'h', 'd', '^', 'o']
# purple, orange, turqoise
ode_colors = ['#257352', '#ff6e54', '#8454ff', '#ffc354']
## Plot results
pltr = Plotter(groupreg = groupreg, ode_sols = ode_sols,
x_vector = ode_sols[0].t,
dependent_variable='u',
xi_index=None,
true_coeffs = sl_coeffs,
is_sturm_liouville = False,
colors = lcolors, markers=markers,
text_str=text_str, fontsize=14, show_legends=False,
text_props=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
# Print the inferred operator
#pltr.print_inferred_operator()
# Plot ODE solutions, regressed Xi vectors, inferred p and q, and reconstruction plots for test data
# Make Figure:
gs = dict(hspace=0, wspace=0)
fig, axes =plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6,10), gridspec_kw = gs)
pltr.plot_ode_solutions(axes[0], ode_sols, lprops=dict(lw=3), number=3, colors=ode_colors)
# define true line color
reg_opts = dict(color='black', ms=5, mec='black', mfc='white', lw=1, linestyle='dashed')
true_opts = dict(linestyle='-', linewidth=3)
npts = 50
sl_alpha=0.4
pltr.plot_xi(axes[1], reg_opts, true_opts, xlims = [0,10], ylims = [-4,4], npts=npts, sl_alpha=sl_alpha, mean_sub=False)
#pltr.plot_p_and_q(axes[2], reg_opts, true_opts, xlims = [0,10], ylims=[-1,3], npts=npts, sl_alpha=sl_alpha)
for ax in axes:
ax.tick_params(axis='y', which='both', left = False, labelleft=False)
ax.tick_params(axis='x', which='both', bottom=True, labelbottom=False)
# Save figure
#plt.savefig('./Figs/2b-KO-summary.svg', dpi=600, transparent=True)
# Make second figure for just p,q:
fig = plt.figure()
ax = plt.gca()
# define true line color
#pltr.plot_p_and_q(ax, reg_opts, true_opts, xlims = [0,10], ylims=[-1,3], npts=npts, sl_alpha=sl_alpha)
# Remove labels from axes
ax.tick_params(axis='x', which='both', bottom=True, labelbottom=False)
ax.tick_params(axis='y', which='both', left=False, labelleft=False)
# Save figure
#plt.savefig('./Figs/2b-KO-pq.svg', dpi=600, transparent=True)
# Third figure for just xi
fig = plt.figure()
ax = plt.gca()
ax.plot(x_array, -1*sl_coeffs['p'], label='u_xx')
ax.plot(x_array, -1*sl_coeffs['p_x'], label='u_x')
ax.plot(x_array, sl_coeffs['q'], label='u')
ax.plot(x_array, sl_alpha*sl_coeffs['q'], label='u^2')
pltr.plot_xi_2(ax, reg_opts, true_opts, xlims = [0,10], npts=npts, sl_alpha=sl_alpha)
plt.ylim([-5,5])
plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))
# Remove labels from axes
#ax.tick_params(axis='x', which='both', bottom=True, labelbottom=False)
#ax.tick_params(axis='y', which='both', left=False, labelleft=False)
# Save figure
#plt.savefig('./Figs/2b-KO-xi.svg', dpi=600, transparent=True)
# Show all the plots (pyplot command)
plt.show()
#low_idcs = np.where(pltr.true_x_vector > 0.1)
#high_idcs = np.where(pltr.true_x_vector < 9.9)
#idcs = np.intersect1d(low_idcs, high_idcs)
#
#p_error = np.linalg.norm(pltr.inferred_phi[idcs] - pltr.p_x[idcs])/np.linalg.norm(pltr.p_x[idcs])
#print('L2 p error: %.4f' % (p_error))
#
#q_error = np.linalg.norm(pltr.inferred_q[idcs] - pltr.q_x[idcs])/np.linalg.norm(pltr.q_x[idcs])
#print('L2 q error: %.4f' % (q_error))
#
###Output
_____no_output_____ |
course/Data Visualization/hello-seaborn.ipynb | ###Markdown
Welcome to Data Visualization! In this hands-on course, you'll learn how to take your data visualizations to the next level with [seaborn](https://seaborn.pydata.org/index.html), a powerful but easy-to-use data visualization tool. To use seaborn, you'll also learn a bit about how to write code in Python, a popular programming language. That said,- the course is aimed at those with no prior programming experience, and- each chart uses short and simple code, making seaborn much faster and easier to use than many other data visualization tools (_such as Excel, for instance_). So, if you've never written a line of code, and you want to learn the **_bare minimum_** to start making faster, more attractive plots today, you're in the right place! To take a peek at some of the charts you'll make, check out the figures below. Your coding environmentTake the time now to scroll quickly up and down this page. You'll notice that there are a lot of different types of information, including:1. **text** (like the text you're reading right now!),2. **code** (which is always contained inside a gray box called a **code cell**), and2. **code output** (or the printed result from running code that always appears immediately below the corresponding code).We refer to these pages as **Jupyter notebooks** (or, often just **notebooks**), and we'll work with them throughout the mini-course. Another example of a notebook can be found in the image below. In the notebook you're reading now, we've already run all of the code for you. Soon, you will work with a notebook that allows you to write and run your own code! Set up the notebookThere are a few lines of code that you'll need to run at the top of every notebook to set up your coding environment. It's not important to understand these lines of code now, and so we won't go into the details just yet. (_Notice that it returns as output: `Setup Complete`._)
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
Setup Complete
###Markdown
Load the dataIn this notebook, we'll work with a dataset of historical FIFA rankings for six countries: Argentina (ARG), Brazil (BRA), Spain (ESP), France (FRA), Germany (GER), and Italy (ITA). The dataset is stored as a CSV file (short for [comma-separated values file](https://bit.ly/2Iu5D4x)). Opening the CSV file in Excel shows a row for each date, along with a column for each country. To load the data into the notebook, we'll use two distinct steps, implemented in the code cell below as follows:- begin by specifying the location (or [filepath](https://bit.ly/1lWCX7s)) where the dataset can be accessed, and then- use the filepath to load the contents of the dataset into the notebook.
###Code
# Path of the file to read
fifa_filepath = "../input/fifa.csv"
# Read the file into a variable fifa_data
fifa_data = pd.read_csv(fifa_filepath, index_col="Date", parse_dates=True)
###Output
_____no_output_____
###Markdown
Note that the code cell above has **four** different lines. CommentsTwo of the lines are preceded by a pound sign (`#`) and contain text that appears faded and italicized. Both of these lines are completely ignored by the computer when the code is run, and they only appear here so that any human who reads the code can quickly understand it. We refer to these two lines as **comments**, and it's good practice to include them to make sure that your code is readily interpretable. Executable codeThe other two lines are **executable code**, or code that is run by the computer (_in this case, to find and load the dataset_). The first line sets the value of `fifa_filepath` to the location where the dataset can be accessed. In this case, we've provided the filepath for you (in quotation marks). _Note that the **comment** immediately above this line of **executable code** provides a quick description of what it does!_The second line sets the value of `fifa_data` to contain all of the information in the dataset. This is done with `pd.read_csv`. It is immediately followed by three different pieces of text (underlined in the image above) that are enclosed in parentheses and separated by commas. These are used to customize the behavior when the dataset is loaded into the notebook: - `fifa_filepath` - The filepath for the dataset always needs to be provided first. - `index_col="Date"` - When we load the dataset, we want each entry in the first column to denote a different row. To do this, we set the value of `index_col` to the name of the first column (`"Date"`, found in cell A1 of the file when it's opened in Excel). - `parse_dates=True` - This tells the notebook to understand each row label as a date (as opposed to a number or other text with a different meaning). These details will make more sense soon, when you have a chance to load your own dataset in a hands-on exercise. > For now, it's important to remember that the end result of running both lines of code is that we can now access the dataset from the notebook by using `fifa_data`.By the way, you might have noticed that these lines of code don't have any output (whereas the lines of code you ran earlier in the notebook returned `Setup Complete` as output). This is expected behavior -- not all code will return output, and this code is a prime example! Examine the dataNow, we'll take a quick look at the dataset in `fifa_data`, to make sure that it loaded properly. We print the _first_ five rows of the dataset by writing one line of code as follows:- begin with the variable containing the dataset (in this case, `fifa_data`), and then - follow it with `.head()`.You can see this in the line of code below.
###Code
# Print the first 5 rows of the data
fifa_data.head()
###Output
_____no_output_____
###Markdown
Check now that the first five rows agree with the image of the dataset (_from when we saw what it would look like in Excel_) above. Plot the dataIn this course, you'll learn about many different plot types. In many cases, you'll only need one line of code to make a chart!For a sneak peek at what you'll learn, check out the code below that generates a line chart.
###Code
# Set the width and height of the figure
plt.figure(figsize=(16,6))
# Line chart showing how FIFA rankings evolved over time
sns.lineplot(data=fifa_data)
###Output
_____no_output_____ |
jupyter_notebooks/Examples/ModelMemberGraph.ipynb | ###Markdown
ModelMemberGraph and SerializationExample notebook of ModelMemberGraph functionality
###Code
import numpy as np
import pygsti
from pygsti.modelpacks import smq2Q_XYICNOT
###Output
_____no_output_____
###Markdown
Similar/Equivalent
###Code
ex_mdl1 = smq2Q_XYICNOT.target_model()
ex_mdl2 = ex_mdl1.copy()
ex_mmg1 = ex_mdl1.create_modelmember_graph()
ex_mmg1.print_graph()
ex_mmg1.mm_nodes['operations']['Gxpi2', 0]
ex_mmg2 = ex_mdl2.create_modelmember_graph()
print(ex_mmg1.is_similar(ex_mmg2))
print(ex_mmg1.is_equivalent(ex_mmg2))
ex_mdl2.operations['Gxpi2', 0][0, 0] = 0.0
ex_mmg2 = ex_mdl2.create_modelmember_graph()
print(ex_mmg1.is_similar(ex_mmg2))
print(ex_mmg1.is_equivalent(ex_mmg2))
ex_mdl2.operations['Gxpi2', 0] = pygsti.modelmembers.operations.StaticArbitraryOp(ex_mdl2.operations['Gxpi2', 0])
ex_mmg2 = ex_mdl2.create_modelmember_graph()
print(ex_mmg1.is_similar(ex_mmg2))
print(ex_mmg1.is_equivalent(ex_mmg2))
pspec = pygsti.processors.QubitProcessorSpec(2, ['Gi', 'Gxpi2', 'Gypi2', 'mygate'], geometry='line', nonstd_gate_unitaries={'mygate': np.eye(2, dtype='complex')})
ln_mdl1 = pygsti.models.create_crosstalk_free_model(pspec,
depolarization_strengths={('Gxpi2', 0): 0.1, ('mygate', 0): 0.2},
lindblad_error_coeffs={('Gypi2', 1): {('H', 1): 0.2, ('S', 2): 0.3}})
print(ln_mdl1)
ln_mmg1 = ln_mdl1.create_modelmember_graph()
ln_mmg1.print_graph()
# Should be exactly the same
ln_mdl2 = pygsti.models.create_crosstalk_free_model(pspec,
depolarization_strengths={('Gxpi2', 0): 0.1},
lindblad_error_coeffs={('Gypi2', 1): {('H', 1): 0.2, ('S', 2): 0.3}})
ln_mmg2 = ln_mdl2.create_modelmember_graph()
print(ln_mmg1.is_similar(ln_mmg2))
print(ln_mmg1.is_equivalent(ln_mmg2))
# Should be similar if we change params
ln_mdl3 = pygsti.models.create_crosstalk_free_model(pspec,
depolarization_strengths={('Gxpi2', 0): 0.01},
lindblad_error_coeffs={('Gypi2', 1): {('H', 1): 0.5, ('S', 2): 0.1}})
ln_mmg3 = ln_mdl3.create_modelmember_graph()
print(ln_mmg1.is_similar(ln_mmg3))
print(ln_mmg1.is_equivalent(ln_mmg3))
# Should fail both, depolarize is on different gate
ln_mdl4 = pygsti.models.create_crosstalk_free_model(pspec,
depolarization_strengths={('Gypi2', 0): 0.1},
lindblad_error_coeffs={('Gypi2', 1): {('H', 1): 0.2, ('S', 2): 0.3}})
ln_mmg4 = ln_mdl4.create_modelmember_graph()
print(ln_mmg1.is_similar(ln_mmg4))
print(ln_mmg1.is_equivalent(ln_mmg4))
###Output
_____no_output_____
###Markdown
Serialization
###Code
ex_mdl1.write('example_files/ex_mdl1.json')
ln_mdl1.write('example_files/ln_mdl1.json')
###Output
_____no_output_____ |
notebooks/load_data.ipynb | ###Markdown
Setup Notebook: Import, Cleanup, Normalize & Split Data Lib imports & Options
###Code
import sys
from typing import Dict, OrderedDict, Tuple
import warnings
from collections import namedtuple
# ML libs
import numpy as np
import pandas as pd
# ASSERTS
# Python ≥3.5 is required
assert sys.version_info >= (3, 5)
# Pandas options
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 25)
# Ignore useless warnings (see SciPy issue #5998)
warnings.filterwarnings(action="ignore", message="^internal gelsd")
###Output
_____no_output_____
###Markdown
Import my libs - The `loader` module is a py module located in the same dir as the notebooks. It sets up the `PYTHONPATH` in order to import py modules from other dirs within the project root. This way you can import the generated scripts (in `/src`) from within a notebook.- The `reloader` module lets you reload the imported py modules (deep reload) that were modified since the last time they were imported. Calling `reloader.clear()` invalidates the cache of imported modules that were modified.
###Code
import loader # set PYTHONPATH for imports
import reloader # Reload local modified files with
###Output
_____no_output_____
###Markdown
Verify that the syspath was indeed modified by the `loader`. You should see a list of paths here where python modules are looked up; the last one should be your project root.
###Code
print(sys.path)
###Output
_____no_output_____
###Markdown
Once `import loader` has been executed you can now import other modules located in different directories under your project root.
###Code
from lib.pd import load_df, drop_na_cols, print_na_cols
###Output
_____no_output_____
###Markdown
If any of your external modules was modified, execute this cell, but don't forget to comment it out again once the modules have been reloaded, or it will create import trouble when this notebook is imported as a module.
###Code
# reloader.clear() # ⚠️ Uncomment and execute this to reload modules that were modified
###Output
_____no_output_____
###Markdown
Load the Data & Arrange the data structure `load_df` with no arguments takes the `data_file_abs_path` OR `data_dir` + `data_file_name` defined in `config.toml`
###Code
df = load_df()
df
print_na_cols(df)
###Output
_____no_output_____
###Markdown
Example of imported function from the `lib` dir
###Code
# Drop all columns having more than 60% of missing values
df = drop_na_cols(df, perc=0.6)
df.head() # Content and Headquarters were dropped
###Output
_____no_output_____
###Markdown
we can pass multiple arguments to `load_df`; they will be combined with what's defined in `config.toml`
###Code
df_future50 = load_df(data_file_name='Future50.csv')
df_future50.head()
print_na_cols(df_future50)
###Output
_____no_output_____
###Markdown
Other data ManipulationUsually you need to clean up and re-arrange the data structure Exported VarsSince we want to import this notebook as if it were (it will be) a python module from another notebook, it would be nice not to pollute that notebook with all the variables of this one. A solution would be to have a function that returns only the needed variables, for example:
###Code
def get_export():
return df, df_future50
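# Example usage from another notebook (assuming this notebook is importable as a
# module named `load_data`, e.g. through the project's import setup):
#   import load_data
#   df, df_future50 = load_data.get_export()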
###Output
_____no_output_____
###Markdown
DICOM - the data
###Code
scan_number = '23262134'
!ls ../data/scans/{scan_number}/ | head -n 3
!ls ../data/scans/{scan_number}/ | wc -l
!ls ../data/masks/
def load_scan(path, use_even=False):
slices = [pydicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))
if use_even:
return slices[::2]
return slices
def get_pixels_hu(slices):
pixel_arrays = [s.pixel_array for s in slices]
image = np.stack(pixel_arrays)
# Convert to int16 (from sometimes int16),
# should be possible as values should always be low enough (<32k)
image = image.astype(np.int16)
# Set outside-of-scan pixels to 0
# The intercept is usually -1024, so air is approximately 0
image[image == -2000] = 0
# Convert to Hounsfield units (HU)
for slice_number in range(len(slices)):
intercept = slices[slice_number].RescaleIntercept
slope = slices[slice_number].RescaleSlope
if slope != 1:
image[slice_number] = slope * image[slice_number].astype(np.float64)
image[slice_number] = image[slice_number].astype(np.int16)
image[slice_number] += np.int16(intercept)
return np.array(image, dtype=np.int16)
###Output
_____no_output_____
###Markdown
###Code
scan = load_scan(f'../data/scans/{scan_number}', use_even=True)
len(scan)
scan[0]
scan_pixels = get_pixels_hu(scan)
plt.hist(scan_pixels.flatten(), bins=80, color='c')
plt.xlabel("Hounsfield Units (HU)")
plt.ylabel("Frequency")
plt.show()
# Show some slice in the middle
plt.imshow(scan_pixels[150], cmap=plt.cm.gray)
plt.show()
scan_pixels.shape
###Output
_____no_output_____
###Markdown
Adjust grayscale
###Code
scan_pixels_copy = scan_pixels.copy()
mask = (scan_pixels_copy >= -50) & (scan_pixels_copy <= 150)
scan_pixels_copy[mask] = np.interp(
scan_pixels_copy[mask],
(scan_pixels_copy[mask].min(), scan_pixels_copy[mask].max()),
(0, 6000),
)
scan_pixels_copy[~mask] = -1000
for i in range(130, len(scan_pixels) - 50, 8):
plt.imshow(scan_pixels_copy[i], cmap=plt.cm.gray)
plt.show()
# fig = plt.figure(figsize=(10,10))
# ax1 = fig.add_subplot(1,2,1)
# ax1.imshow(scan_pixels_copy[i], cmap=plt.cm.gray)
# ax2 = fig.add_subplot(1,2,2)
# ax2.imshow(segmentation[i], cmap=plt.cm.Reds)
# plt.show()
###Output
_____no_output_____
###Markdown
NRRD - segmentation
###Code
segmentation, metadata = nrrd.read(f'../data/masks/{scan_number}.nrrd')
segmentation.shape
metadata
###Output
_____no_output_____
###Markdown
Make axis consistent
###Code
segmentation = np.rollaxis(np.rollaxis(segmentation, 1), 2)
###Output
_____no_output_____
###Markdown
Show segmented
###Code
for i in range(130, len(scan_pixels) - 50, 10):
plt.imshow(scan_pixels[i], cmap=plt.cm.gray)
plt.imshow(segmentation[i], cmap=plt.cm.Reds, alpha=0.2, vmin=0, vmax=1)
plt.show()
###Output
_____no_output_____
###Markdown
Segmented & mask side by side
###Code
for i in range(130, len(scan_pixels) - 50, 8):
fig = plt.figure(figsize=(10,10))
ax1 = fig.add_subplot(1,2,1)
ax1.imshow(scan_pixels_copy[i], cmap=plt.cm.gray)
ax2 = fig.add_subplot(1,2,2)
ax2.imshow(segmentation[i], cmap=plt.cm.Reds)
plt.show()
###Output
_____no_output_____
###Markdown
Make stuff machine learning friendly - spacing
###Code
def resample(image, scan, new_spacing=[1,1,1]):
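    # Resample the volume to the requested (approximately isotropic) voxel spacing,
    # using the slice thickness and in-plane pixel spacing stored in the DICOM headers.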
# Determine current pixel spacing
spacing = np.array([scan[0].SliceThickness] + list(scan[0].PixelSpacing), dtype=np.float32)
resize_factor = spacing / new_spacing
new_real_shape = image.shape * resize_factor
new_shape = np.round(new_real_shape)
real_resize_factor = new_shape / image.shape
new_spacing = spacing / real_resize_factor
image = scipy.ndimage.interpolation.zoom(image, real_resize_factor, mode='nearest')
return image
scan_pixels = resample(scan_pixels, scan, [1, 1, 1])
segmentation = resample(segmentation, scan, [1, 1, 1])
scan_pixels.shape
for i in range(130, len(scan_pixels) - 50, 10):
plt.imshow(scan_pixels[i], cmap=plt.cm.gray)
plt.imshow(segmentation[i], cmap=plt.cm.Reds, alpha=0.2, vmin=0, vmax=1)
plt.show()
###Output
_____no_output_____
###Markdown
###Code
%%time
!git clone https://github.com/sbooeshaghi/SBA-PPP-Loan-Data.git
!unzip /content/SBA-PPP-Loan-Data/over_150k/foia_150k_plus.csv.zip
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
def nd(arr):
return np.asarray(arr).reshape(-1)
def yex(ax):
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
    # now plot both limits against each other
ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
ax.set_aspect('equal')
ax.set_xlim(lims)
ax.set_ylim(lims)
return ax
fsize=20
plt.rcParams.update({'font.size': fsize})
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
Load data > $150k
###Code
df = pd.read_csv("/content/foia_150k_plus.csv")
df.head()
###Output
_____no_output_____
###Markdown
Load data < $150k (per state)
###Code
fl = pd.read_csv("/content/SBA-PPP-Loan-Data/under_150k/Florida/foia_up_to_150k_FL.csv")
fl.head()
###Output
_____no_output_____
###Markdown
train.rename(columns={'fecha':'fecha_venta'}, inplace=True)
###Code
train.shape
train.to_csv('../data/raw/train_aggr.csv', sep=';', index=False)
print("Num. id_pos de ventas: ",ventas.id_pos.nunique())
print("Num. id_pos de train (ventas x pos) : ",train.id_pos.nunique())
print("Num. id_pos no encontrados: " , ventas.id_pos.nunique() - train.id_pos.nunique())
###Output
Num. id_pos no encontrados: 449
###Markdown
TODO: The shipments (envios) part still needs to be added; the date information has to be taken into account, i.e. fecha_venta > fecha_envio
###Code
# Nos quedamos con un unico id_pos, sin tener en cuenta la fecha
envios_tmp = envios[['id_pos']].drop_duplicates()
train[train.id_pos.isin(envios.id_pos)].id_pos.nunique()
train[train.id_pos.isin(envios.id_pos)!=True].id_pos.nunique()
train = pd.merge(train, envios_tmp, how='left', left_on='id_pos', right_on='id_pos')
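# TODO sketch (assumed column names): a date-aware merge would keep only shipments
# made before the sale, e.g.
#   tmp = pd.merge(train, envios, how='left', on='id_pos')
#   tmp = tmp[tmp['fecha_venta'] > tmp['fecha_envio']]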
train.shape
train.head()
submittion.id_pos.nunique()
submittion.dtypes
submittion[submittion.id_pos.isin(pos.id_pos)]['id_pos'].nunique()
###Output
_____no_output_____
###Markdown
>v0.1 This code implements simple feature extraction and training using LightGBM. Feature extraction is very simple and can be improved.
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import librosa
import matplotlib.pyplot as plt
import gc
from tqdm import tqdm, tqdm_notebook
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.metrics import roc_auc_score
from scipy import stats
from sklearn.model_selection import KFold
import warnings
warnings.filterwarnings('ignore')
tqdm.pandas()
from sklearn.preprocessing import LabelEncoder
def split_and_label(rows_labels):
row_labels_list = []
for row in rows_labels:
row_labels = row.split(',')
labels_array = np.zeros((80))
for label in row_labels:
index = label_mapping[label]
labels_array[index] = 1
row_labels_list.append(labels_array)
return row_labels_list
def create_features( pathname ):
y, sr = librosa.load( pathname)
# trim silence
if 0 < len(y): # workaround: 0 length causes error
y, _ = librosa.effects.trim(y)
xc = pd.Series(y)
X = []
X.append(len(xc)/sr)
X.append( xc.mean() )
X.append( xc.median() )
X.append( xc.std() )
X.append( xc.max() )
X.append( xc.min() )
X.append( xc.skew() )
X.append( xc.mad() )
X.append( xc.kurtosis() )
X.append( np.mean(np.diff(xc)) )
X.append( np.mean(np.nonzero((np.diff(xc) / xc[:-1]))[0]) )
X.append( np.abs(xc).max() )
X.append( np.abs(xc).min() )
X.append( xc[:4410].std() )
X.append( xc[-4410:].std() )
X.append( xc[:44100].std() )
X.append( xc[-44100:].std() )
X.append( xc[:4410].mean() )
X.append( xc[-4410:].mean() )
X.append( xc[:44100].mean() )
X.append( xc[-44100:].mean() )
X.append( xc[:4410].min() )
X.append( xc[-4410:].min() )
X.append( xc[:44100].min() )
X.append( xc[-44100:].min() )
X.append( xc[:4410].max() )
X.append( xc[-4410:].max() )
X.append( xc[:44100].max() )
X.append( xc[-44100:].max() )
X.append( xc[:4410].skew() )
X.append( xc[-4410:].skew() )
X.append( xc[:44100].skew() )
X.append( xc[-44100:].skew() )
X.append( xc.max() / np.abs(xc.min()) )
X.append( xc.max() - np.abs(xc.min()) )
X.append( xc.sum() )
X.append( np.mean(np.nonzero((np.diff(xc[:4410]) / xc[:4410][:-1]))[0]) )
X.append( np.mean(np.nonzero((np.diff(xc[-4410:]) / xc[-4410:][:-1]))[0]) )
X.append( np.mean(np.nonzero((np.diff(xc[:44100]) / xc[:44100][:-1]))[0]) )
X.append( np.mean(np.nonzero((np.diff(xc[-44100:]) / xc[-44100:][:-1]))[0]) )
X.append( np.quantile(xc, 0.95) )
X.append( np.quantile(xc, 0.99) )
X.append( np.quantile(xc, 0.10) )
X.append( np.quantile(xc, 0.05) )
X.append( np.abs(xc).mean() )
X.append( np.abs(xc).std() )
return np.array( X )
train_curated = pd.read_csv('../data/raw/train_curated.csv')
train_noisy = pd.read_csv('../data/raw/train_noisy.csv')
test = pd.read_csv('../data/raw/sample_submission.csv')
print(train_curated.shape, test.shape, train_noisy.shape)
label_columns = list( test.columns[1:] )
label_mapping = dict((label, index) for index, label in enumerate(label_columns))
label_mapping
len(label_mapping)
train_curated_labels = split_and_label(train_curated['labels'])
train_noisy_labels = split_and_label(train_noisy ['labels'])
len(train_curated_labels), len(train_noisy_labels)
for f in label_columns:
train_curated[f] = 0.0
train_noisy[f] = 0.0
train_curated[label_columns] = train_curated_labels
train_noisy[label_columns] = train_noisy_labels
train_curated['num_labels'] = train_curated[label_columns].sum(axis=1)
train_noisy['num_labels'] = train_noisy[label_columns].sum(axis=1)
train_curated['path'] = '../data/raw/train_curated/'+ train_curated['fname']
train_noisy ['path'] = '../data/raw/train_noisy/'+ train_noisy['fname']
test['path'] = '../data/raw/test/' + test['fname']
train_curated.head()
train_noisy.head()
train_noisy.shape
train = pd.concat([train_curated, train_noisy],axis=0)
train.shape
train.to_pickle('../data/processed/train.pkl')
train.to_csv('../data/processed/train.csv',sep=';',index=False)
train_curated.to_csv('../data/processed/train_curated.csv',sep=';',index=False)
train_noisy.to_csv('../data/processed/train_noisy.csv',sep=';',index=False)
train_noisy.to_pickle('../data/processed/train_noisy.pkl')
test.to_pickle('../data/processed/test.pkl')
np.save('../data/processed/y_onehotenc_train_curated.npy', train_curated_labels)
del train_curated
del train_noisy
###Output
_____no_output_____
###Markdown
Encode target
###Code
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(train['labels'].to_list())
print(integer_encoded)
np.save('../data/processed/train_curated_classes.npy', label_encoder.classes_)
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(test['labels'].to_list())
print(integer_encoded)
###Output
_____no_output_____
###Markdown
Making features from train curated
###Code
X = [create_features(fn) for fn in tqdm(train['path'].values)]
X = np.array(X)
X.shape
np.save('../data/processed/train_curated_features.npy', X)
###Output
_____no_output_____
###Markdown
Making features from test curated
###Code
X = [create_features(fn) for fn in tqdm(test['path'].values)]
X = np.array(X)
X.shape
np.save('../data/processed/test_features.npy', X)
###Output
_____no_output_____
###Markdown
X = Parallel(n_jobs= 4)(delayed(create_features)(fn) for fn in tqdm(train['path'].values) )X = np.array( X )X.shape Xtest = Parallel(n_jobs= 4)(delayed(create_features)( '../input/test/'+fn) for fn in tqdm(test['fname'].values) )Xtest = np.array( Xtest )Xtest.shape n_fold = 5folds = KFold(n_splits=n_fold, shuffle=True, random_state=69)params = {'num_leaves': 15, 'min_data_in_leaf': 200, 'objective':'binary', "metric": 'auc', 'max_depth': -1, 'learning_rate': 0.05, "boosting": "gbdt", "bagging_fraction": 0.85, "bagging_freq": 1, "feature_fraction": 0.20, "bagging_seed": 42, "verbosity": -1, "nthread": -1, "random_state": 69}PREDTRAIN = np.zeros( (X.shape[0],80) )PREDTEST = np.zeros( (Xtest.shape[0],80) )for f in range(len(label_columns)): y = train[ label_columns[f] ].values oof = np.zeros( X.shape[0] ) oof_test = np.zeros( Xtest.shape[0] ) for fold_, (trn_idx, val_idx) in enumerate(folds.split(X,y)): model = lgb.LGBMClassifier(**params, n_estimators = 20000) model.fit(X[trn_idx,:], y[trn_idx], eval_set=[(X[val_idx,:], y[val_idx])], eval_metric='auc', verbose=0, early_stopping_rounds=25) oof[val_idx] = model.predict_proba(X[val_idx,:], num_iteration=model.best_iteration_)[:,1] oof_test += model.predict_proba(Xtest , num_iteration=model.best_iteration_)[:,1]/5.0 PREDTRAIN[:,f] = oof PREDTEST [:,f] = oof_test print( f, str(roc_auc_score( y, oof ))[:6], label_columns[f] ) from sklearn.metrics import roc_auc_scoredef calculate_overall_lwlrap_sklearn(truth, scores): """Calculate the overall lwlrap using sklearn.metrics.lrap.""" sklearn doesn't correctly apply weighting to samples with no labels, so just skip them. sample_weight = np.sum(truth > 0, axis=1) nonzero_weight_sample_indices = np.flatnonzero(sample_weight > 0) overall_lwlrap = label_ranking_average_precision_score( truth[nonzero_weight_sample_indices, :] > 0, scores[nonzero_weight_sample_indices, :], sample_weight=sample_weight[nonzero_weight_sample_indices]) return overall_lwlrapprint( 'lwlrap cv:', calculate_overall_lwlrap_sklearn( train[label_columns].values, PREDTRAIN ) ) test[label_columns] = PREDTESTtest.to_csv('submission.csv', index=False)test.head()
###Code
train_curated.index
###Output
_____no_output_____ |
day5_hyperOpp.ipynb | ###Markdown
Feature Engineering
###Code
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(x.split('cm')[0].replace(' ','')))
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat']
xgb_params = {
'max_depth': 5,
'n_estimators': 50,
'learning_rate': 0.1,
'seed': 0
}
model = xgb.XGBRegressor(**xgb_params)
run_model(model, feats)
###Output
[17:25:24] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[17:25:28] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[17:25:32] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
###Markdown
Hyperopt
###Code
def obj_func(params):
print("Training with prams: ")
print(params)
mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
#space
xgb_reg_params ={
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype =int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',
'n_estimators': 100,
'seed': 0,
}
## run
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)
best
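# Note: for hp.choice parameters, `best` holds indices into the choice lists,
# not the values themselves; hyperopt.space_eval can map them back, e.g.
#   from hyperopt import space_eval
#   space_eval(xgb_reg_params, best)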
###Output
_____no_output_____ |
notebooks/kubeflow/explore-dvf.ipynb | ###Markdown
Option 1: Merge raw data in one file
###Code
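# Assumed setup for this notebook (not shown in the original cells): pandas and a
# base directory for the DVF files -- adjust `homedir` to wherever the data lives.
import os
import pandas as pd
homedir = os.path.expanduser("~")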
df2021s1 = pd.read_csv(homedir + '/data/dvf/2021s1.txt', sep='|', decimal=',', low_memory=False)
df2021s1 = df2021s1.query("`Commune` == 'TOULOUSE' & `Nature mutation` == 'Vente' & `Type local` == 'Appartement' & `Nombre de lots` == 1 & not(`Surface Carrez du 1er lot`.isnull())")
df2020 = pd.read_csv(homedir + '/data/dvf/2020.txt', sep='|', decimal=',', low_memory=False)
df2020 = df2020.query("`Commune` == 'TOULOUSE' & `Nature mutation` == 'Vente' & `Type local` == 'Appartement' & `Nombre de lots` == 1 & not(`Surface Carrez du 1er lot`.isnull())")
df2019 = pd.read_csv(homedir + '/data/dvf/2019.txt', sep='|', decimal=',', low_memory=False)
df2019 = df2019.query("`Commune` == 'TOULOUSE' & `Nature mutation` == 'Vente' & `Type local` == 'Appartement' & `Nombre de lots` == 1 & not(`Surface Carrez du 1er lot`.isnull())")
df2018 = pd.read_csv(homedir + '/data/dvf/2018.txt', sep='|', decimal=',', low_memory=False)
df2018 = df2018.query("`Commune` == 'TOULOUSE' & `Nature mutation` == 'Vente' & `Type local` == 'Appartement' & `Nombre de lots` == 1 & not(`Surface Carrez du 1er lot`.isnull())")
dfdvf_tls = pd.concat([df2018,df2019,df2020,df2021s1])
dfdvf_tls.to_csv(homedir + '/data/dvf/tls.txt', sep='|', index=None)
###Output
_____no_output_____
###Markdown
Option 2: Load directly all data
###Code
dfdvf_tls = pd.read_csv(homedir + '/data/dvf/tls.txt', sep='|')
###Output
_____no_output_____
###Markdown
Start exploring the data
###Code
list(dfdvf_tls.columns)
dfdvf_tls.head(5)
dfdvf_tls = dfdvf_tls[['Code postal', 'Nombre pieces principales', 'Surface Carrez du 1er lot', 'Valeur fonciere']]
dfdvf_tls = dfdvf_tls.rename(columns={
"Code postal": "code_postal",
"Nombre pieces principales": "nb_pieces",
"Surface Carrez du 1er lot": "surface",
"Valeur fonciere": "prix_vente"}
)
dfdvf_tls = dfdvf_tls.astype({'code_postal': 'int32', 'nb_pieces': 'int32', 'surface': 'int32', 'prix_vente': 'int32'})
dfdvf_tls = dfdvf_tls.astype({'code_postal': 'str'})
dfdvf_tls.head(5)
dfdvf_tls.count()
dfdvf_tls[dfdvf_tls['code_postal']=='31400']
#dfdvf_tls.to_parquet('/bd-fs-mnt/project_repo/data/dvf/cleaned/2021s1.parquet.gzip', compression='gzip')
dfdvf_tls.to_csv(homedir + '/data/dvf/cleaned/tls.txt', index=None)
###Output
_____no_output_____ |
notebooks/ch08_Graphical_Models.ipynb | ###Markdown
8. Graphical Models
###Code
%matplotlib inline
import itertools
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_mldata
from prml import bayesnet as bn
np.random.seed(1234)
b = bn.discrete([0.1, 0.9])
f = bn.discrete([0.1, 0.9])
g = bn.discrete([[[0.9, 0.8], [0.8, 0.2]], [[0.1, 0.2], [0.2, 0.8]]], b, f)
print("b:", b)
print("f:", f)
print("g:", g)
g.observe(0)
print("b:", b)
print("f:", f)
print("g:", g)
b.observe(0)
print("b:", b)
print("f:", f)
print("g:", g)
###Output
b: DiscreteVariable(observed=[1. 0.])
f: DiscreteVariable(proba=[0.11111111 0.88888889])
g: DiscreteVariable(observed=[1. 0.])
###Markdown
8.3.3 Illustration: Image de-noising
###Code
mnist = fetch_mldata("MNIST original")
x = mnist.data[0]
binarized_img = (x > 127).astype(np.int).reshape(28, 28)
plt.imshow(binarized_img, cmap="gray")
indices = np.random.choice(binarized_img.size, size=int(binarized_img.size * 0.1), replace=False)
noisy_img = np.copy(binarized_img)
noisy_img.ravel()[indices] = 1 - noisy_img.ravel()[indices]
plt.imshow(noisy_img, cmap="gray")
markov_random_field = np.array([
[[bn.discrete([0.5, 0.5], name=f"p(z_({i},{j}))") for j in range(28)] for i in range(28)],
[[bn.DiscreteVariable(2) for _ in range(28)] for _ in range(28)]])
a = 0.9
b = 0.9
pa = [[a, 1 - a], [1 - a, a]]
pb = [[b, 1 - b], [1 - b, b]]
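# pa couples neighbouring latent pixels (a smoothness prior); pb couples each latent pixel to its noisy observation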
for i, j in itertools.product(range(28), range(28)):
bn.discrete(pb, markov_random_field[0, i, j], out=markov_random_field[1, i, j], name=f"p(x_({i},{j})|z_({i},{j}))")
if i != 27:
bn.discrete(pa, out=[markov_random_field[0, i, j], markov_random_field[0, i + 1, j]], name=f"p(z_({i},{j}), z_({i+1},{j}))")
if j != 27:
bn.discrete(pa, out=[markov_random_field[0, i, j], markov_random_field[0, i, j + 1]], name=f"p(z_({i},{j}), z_({i},{j+1}))")
markov_random_field[1, i, j].observe(noisy_img[i, j], proprange=0)
for _ in range(10000):
i, j = np.random.choice(28, 2)
markov_random_field[1, i, j].send_message(proprange=3)
restored_img = np.zeros_like(noisy_img)
for i, j in itertools.product(range(28), range(28)):
restored_img[i, j] = np.argmax(markov_random_field[0, i, j].proba)
plt.imshow(restored_img, cmap="gray")
###Output
_____no_output_____
###Markdown
8. Graphical Models
###Code
%matplotlib inline
import itertools
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_openml
from prml import bayesnet as bn
np.random.seed(1234)
b = bn.discrete([0.1, 0.9])
f = bn.discrete([0.1, 0.9])
g = bn.discrete([[[0.9, 0.8], [0.8, 0.2]], [[0.1, 0.2], [0.2, 0.8]]], b, f)
print("b:", b)
print("f:", f)
print("g:", g)
g.observe(0)
print("b:", b)
print("f:", f)
print("g:", g)
b.observe(0)
print("b:", b)
print("f:", f)
print("g:", g)
###Output
b: DiscreteVariable(observed=[1. 0.])
f: DiscreteVariable(proba=[0.11111111 0.88888889])
g: DiscreteVariable(observed=[1. 0.])
###Markdown
8.3.3 Illustration: Image de-noising
###Code
x, _ = fetch_openml("mnist_784", return_X_y=True, as_frame=False)
x = x[0]
binarized_img = (x > 127).astype(int).reshape(28, 28)  # np.int was removed in recent NumPy releases
plt.imshow(binarized_img, cmap="gray")
indices = np.random.choice(binarized_img.size, size=int(binarized_img.size * 0.1), replace=False)
noisy_img = np.copy(binarized_img)
noisy_img.ravel()[indices] = 1 - noisy_img.ravel()[indices]
plt.imshow(noisy_img, cmap="gray")
markov_random_field = np.array([
[[bn.discrete([0.5, 0.5], name=f"p(z_({i},{j}))") for j in range(28)] for i in range(28)],
[[bn.DiscreteVariable(2) for _ in range(28)] for _ in range(28)]])
a = 0.9
b = 0.9
pa = [[a, 1 - a], [1 - a, a]]
pb = [[b, 1 - b], [1 - b, b]]
for i, j in itertools.product(range(28), range(28)):
bn.discrete(pb, markov_random_field[0, i, j], out=markov_random_field[1, i, j], name=f"p(x_({i},{j})|z_({i},{j}))")
if i != 27:
bn.discrete(pa, out=[markov_random_field[0, i, j], markov_random_field[0, i + 1, j]], name=f"p(z_({i},{j}), z_({i+1},{j}))")
if j != 27:
bn.discrete(pa, out=[markov_random_field[0, i, j], markov_random_field[0, i, j + 1]], name=f"p(z_({i},{j}), z_({i},{j+1}))")
markov_random_field[1, i, j].observe(noisy_img[i, j], proprange=0)
for _ in range(10000):
i, j = np.random.choice(28, 2)
markov_random_field[1, i, j].send_message(proprange=3)
restored_img = np.zeros_like(noisy_img)
for i, j in itertools.product(range(28), range(28)):
restored_img[i, j] = np.argmax(markov_random_field[0, i, j].proba)
plt.imshow(restored_img, cmap="gray")
###Output
_____no_output_____
###Markdown
8. Graphical Models
###Code
%matplotlib inline
import itertools
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_openml
from prml import bayesnet as bn
np.random.seed(1234)
b = bn.discrete([0.1, 0.9])
f = bn.discrete([0.1, 0.9])
g = bn.discrete([[[0.9, 0.8], [0.8, 0.2]], [[0.1, 0.2], [0.2, 0.8]]], b, f)
print("b:", b)
print("f:", f)
print("g:", g)
g.observe(0)
print("b:", b)
print("f:", f)
print("g:", g)
b.observe(0)
print("b:", b)
print("f:", f)
print("g:", g)
###Output
b: DiscreteVariable(observed=[1. 0.])
f: DiscreteVariable(proba=[0.11111111 0.88888889])
g: DiscreteVariable(observed=[1. 0.])
###Markdown
8.3.3 Illustration: Image de-noising
###Code
mnist = fetch_openml("mnist_784", as_frame=False)  # as_frame=False keeps .data as a NumPy array so row indexing works
x = mnist.data[0]
binarized_img = (x > 127).astype(int).reshape(28, 28)  # np.int was removed in recent NumPy releases
plt.imshow(binarized_img, cmap="gray")
indices = np.random.choice(binarized_img.size, size=int(binarized_img.size * 0.1), replace=False)
noisy_img = np.copy(binarized_img)
noisy_img.ravel()[indices] = 1 - noisy_img.ravel()[indices]
plt.imshow(noisy_img, cmap="gray")
markov_random_field = np.array([
[[bn.discrete([0.5, 0.5], name=f"p(z_({i},{j}))") for j in range(28)] for i in range(28)],
[[bn.DiscreteVariable(2) for _ in range(28)] for _ in range(28)]])
a = 0.9
b = 0.9
pa = [[a, 1 - a], [1 - a, a]]
pb = [[b, 1 - b], [1 - b, b]]
for i, j in itertools.product(range(28), range(28)):
bn.discrete(pb, markov_random_field[0, i, j], out=markov_random_field[1, i, j], name=f"p(x_({i},{j})|z_({i},{j}))")
if i != 27:
bn.discrete(pa, out=[markov_random_field[0, i, j], markov_random_field[0, i + 1, j]], name=f"p(z_({i},{j}), z_({i+1},{j}))")
if j != 27:
bn.discrete(pa, out=[markov_random_field[0, i, j], markov_random_field[0, i, j + 1]], name=f"p(z_({i},{j}), z_({i},{j+1}))")
markov_random_field[1, i, j].observe(noisy_img[i, j], proprange=0)
for _ in range(10000):
i, j = np.random.choice(28, 2)
markov_random_field[1, i, j].send_message(proprange=3)
restored_img = np.zeros_like(noisy_img)
for i, j in itertools.product(range(28), range(28)):
restored_img[i, j] = np.argmax(markov_random_field[0, i, j].proba)
plt.imshow(restored_img, cmap="gray")
###Output
_____no_output_____ |
Copy_of_LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb | ###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.stats
###Code
# Taking requests! Come to lecture with a topic or problem and we'll try it.
from scipy import stats
b1 = stats.binom(n=100, p=0.6)
b1.mean()
b1.median()
import random
random.seed(100)  # Reproducibility! Next line should give 2386
random.randint(0, 10000)
chi2 = stats.chi2(5)
chi2.mean()
chi2.median()
chi2 = stats.chi2(500)
chi2.mean()
chi2.median()
# Confidence intervals!
# Similar to hypothesis testing, but centered at sample mean
# Better than reporting the "point estimate" (sample mean)
# Why? Because point estimates aren't always perfect
import numpy as np
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data.
Using t-distribution and two-tailed test, default 95% confidence.
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Print a pretty report of a confidence interval.
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
None, but prints to screen the report
"""
print('Mean: {}'.format(confidence_interval[0]))
print('Lower bound: {}'.format(confidence_interval[1]))
print('Upper bound: {}'.format(confidence_interval[2]))
x = 2
print('x is: {}'.format(x))
import numpy as np
coinflips = np.random.binomial(n=1, p=0.5, size=100)
print(coinflips)
import pandas as pd
df = pd.DataFrame(coinflips)
df.describe()
confidence_interval(coinflips, confidence=0.95)
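# Quick cross-check (a sketch, not part of the original lecture notes): scipy can
# build the same t-based interval directly from the sample mean and standard error.
stats.t.interval(0.95, len(coinflips) - 1,
                 loc=np.mean(coinflips), scale=stats.sem(coinflips))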
###Output
_____no_output_____
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
###Code
# TODO - your code!
#Getting started with drug data
# http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip
!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip
!unzip drugsCom_raw.zip
!head drugsComTrain_raw.tsv
df = pd.read_table('drugsComTrain_raw.tsv')
df.head()
df_bipol = df[(df['condition'] == 'Bipolar Disorde')]
df_bipol['drugName'].value_counts()
df_bipol[df_bipol['drugName'] == 'Lamotrigine'].describe()
from scipy import stats
confidence = stats.norm.interval(0.95, loc=8.28, scale = 8.28 / np.sqrt(406))
confidence
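# A hedged alternative sketch (names below are introduced here, not from the original):
# the interval's scale is usually the standard error, sample std / sqrt(n), rather than
# mean / sqrt(n), so recomputing it from the Lamotrigine ratings gives a different band.
lam_ratings = df_bipol[df_bipol['drugName'] == 'Lamotrigine']['rating']
stats.norm.interval(0.95, loc=lam_ratings.mean(),
                    scale=lam_ratings.std() / np.sqrt(len(lam_ratings)))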
import matplotlib.pyplot as plt
plt.plot(confidence)
plt.hist(confidence)
# For the specific drug 'Lamotrigine', we can state with 95% confidence that the
# mean rating given by bipolar-disorder patients who took this drug lies roughly
# between 7.47 and 9.09.
###Output
_____no_output_____
###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.stats
###Code
# Taking requests! Come to lecture with a topic or problem and we'll try it.
from scipy import stats
b1 = stats.binom(n=100, p=0.6)
b1.mean()
b1.median()
import random
random.seed(100) # Reproducibility! Next line should give 2386
random.randint(0, 10000)
chi2 = stats.chi2(500)
chi2.mean()
chi2.median()
# Confidence intervals!
# Similar to hypothesis testing, but centered at sample mean
# Better than reporting the "point estimate" (sample mean)
# Why? Because point estimates aren't always perfect
import numpy as np
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data.
Using t-distribution and two-tailed test, default 95% confidence.
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Return a string with a pretty report of a confidence interval.
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
A string describing the interval
"""
#print('Mean: {}'.format(confidence_interval[0]))
#print('Lower bound: {}'.format(confidence_interval[1]))
#print('Upper bound: {}'.format(confidence_interval[2]))
s = "our mean lies in the interval ]{:.2}, {:.2}[".format(
confidence_interval[1], confidence_interval[2])
return s
x = 2
print('x is: {}'.format(x))
coinflips = np.random.binomial(n=1, p=0.5, size=100)
print(coinflips)
import pandas as pd
df = pd.DataFrame(coinflips)
df.describe()
coinflip_interval = confidence_interval(coinflips, confidence=0.95)
coinflip_interval
report_confidence_interval(coinflip_interval)
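# A small illustration (sketch, not from the original lecture): the interval narrows
# roughly like 1/sqrt(n), which is why reporting only the point estimate hides how
# much data stands behind it.
for n in [100, 1000, 10000]:
    flips = np.random.binomial(n=1, p=0.5, size=n)
    mean_, lower, upper = confidence_interval(flips)
    print(n, "flips -> interval width", round(upper - lower, 4))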
###Output
_____no_output_____
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
###Code
# Getting started with drug data
# http://archive.ics.uci.edu/ml/datasets/Drug+Review+Dataset+%28Drugs.com%29
!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip
!unzip drugsCom_raw.zip
!ls
!head drugsComTrain_raw.tsv
df = pd.read_table('drugsComTrain_raw.tsv')
df.head()
df.shape
df.dropna(inplace = True)
# dropping these seems OK because it's only ~800 entries out of 161,000 and only drops 5 drugs out of 3,431, which is the important part
df['drugName'].nunique()
df.isnull().sum() #making 2x sure the data has no na
df['freq'] = df.groupby('drugName')['drugName'].transform('count')#adding a column for counting how often a drug has a review
df
df[df['drugName'].str.contains('Valsartan')].count()# checking to make sure freq works
df.drop(df[df['freq'] < 30].index, inplace=True)
'''dropping frequencies of less than 30 to make sure my sample is less skewed
by outliers'''
df
mean_rates =df.pivot_table(index='drugName',values='rating', aggfunc='mean')
gbc= df.groupby(df['condition']).count()#???
gbc
df.drop(df[df['condition'].str.contains('users')].index, inplace=True)
df['condition'].nunique()
gbc= df.groupby(df['condition']).count()
gbc
mean_rates2 =df.pivot_table(index='condition',values='rating', aggfunc='mean')
mean_rates2
drugratingCI = confidence_interval(mean_rates['rating'], confidence = .95)
drugCI = drugratingCI[0]
drugCI_lb = drugratingCI[1]
drugCI_ub = drugratingCI[2]
report_confidence_interval(drugratingCI)
conditionratingCI = confidence_interval(mean_rates2['rating'], confidence = .95)
sample_mean = conditionratingCI[0]
sample_mean_lb = conditionratingCI[1]
sample_mean_ub = conditionratingCI[2]
report_confidence_interval(conditionratingCI)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
bp = ax.plot(mean_rates2, 'bo')
ax.axhline(sample_mean, color = 'orange')
ax.axhline(sample_mean_lb, color = 'green')
ax.axhline(sample_mean_ub, color = 'green')
fig, ax = plt.subplots()
bp = ax.plot(mean_rates, 'ro')
ax.axhline(drugCI, color = 'orange')
ax.axhline(drugCI_lb, color = 'green')
ax.axhline(drugCI_ub, color = 'green');
mean_rates.sort_values(by = 'rating', ascending = True, inplace = True)
mean_rates # yikes some of these are terrible for you D:
'''Because the confidence intervals for both of our variables were not only quite
narrow but also centered on fairly high ratings, we can infer that more people leave good
ratings than bad, which is borne out by our data. This means that people who get a
drug *and* leave a review generally like it.'''
###Output
_____no_output_____
###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.statsCandidate topics to explore:- `scipy.stats.chi2` - the Chi-squared distribution, which we can use to reproduce the Chi-squared test- Calculate the Chi-Squared test statistic "by hand" (with code), and feed it into `chi2`- Build a confidence interval with `stats.t.ppf`, the t-distribution percentile point function (the inverse of the CDF) - we can write a function to return a tuple of `(mean, lower bound, upper bound)` that you can then use for the assignment (visualizing confidence intervals)
###Code
gender = ['male', 'male', 'male', 'female', 'female', 'female']
eats_outside = ['outside', 'inside', 'inside', 'inside', 'outside', 'outside']
import pandas as pd
df = pd.DataFrame({"gender": gender, "preference": eats_outside})
df.head(6)
pd.crosstab(df.gender, df.preference)
table = pd.crosstab(df.gender, df.preference, margins=True)
df = df.replace("male", 0)
df = df.replace("female", 1)
df = df.replace('outside', 0)
df = df.replace('inside',1)
df.head()
pd.crosstab(df.gender, df.preference, margins=True)
expected = [[1.5, 1.5],
[1.5, 1.5]]
# Lets think about marginal proportions
# Let's just type out/explain the margin counts
# Total number of males (first row) = 3
# Total number of females (second row) = 3
# Total number of people who prefer outside = 3
# Total number of people who prefer inside = 3
# Marginal Proportion of the first row
# obs / total = (3 males) / (6 humans)
pd.crosstab(df.gender, df.preference, margins=True, normalize='all')
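# Sketch (added here, not from the lecture) of where those expected counts come from:
# under independence each cell's expected count is (row total * column total) / grand total.
counts = pd.crosstab(df.gender, df.preference).values
row_totals = counts.sum(axis=1, keepdims=True)
col_totals = counts.sum(axis=0, keepdims=True)
print(row_totals @ col_totals / counts.sum())  # [[1.5, 1.5], [1.5, 1.5]] for this table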
observed = np.array([[.5,.5],
[.5,.5]])
deviation = numerator = observed - expected
print(numerator)
deviation_squared = deviation**2
print("deviation squared \n", deviation_squared)
fraction = (deviation_squared / expected)
print("fraction: \n", fraction)
chi2 = fraction.sum()
print(chi2/4)
expected_values = [[1.5, 1.5], [1.5, 1.5]]
deviation = (((.5)**2) / 1.5) * 4 # 0.5^2 deviation per cell, scaled and added
print(deviation)
chi_data = [[1,2],
[2,1]]
from scipy.stats import chisquare # One-way chi square test
chisquare(chi_data, axis=None)
from scipy.stats import chi2_contingency
# table = [[1,2],[2,4]]
chi2statistic, pvalue, dof, observed = chi2_contingency(table)
print("chi2 stat", chi2statistic)
print("p-value", pvalue)
print('degrees of freedom', dof)
print("Contingency Table: \n", observed)
def lazy_chisquare(observed, expected):
chisquare = 0
for row_obs, row_exp in zip(observed, expected):
for obs, exp in zip(row_obs, row_exp):
chisquare += (obs - exp)**2 / exp
return chisquare
chi_data = [[1, 2], [2, 1]]
expected_values = [[1.5, 1.5], [1.5, 1.5]]
chistat = lazy_chisquare(chi_data, expected_values)
chistat
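# Follow-up sketch (not part of the original lecture): turn the hand-computed statistic
# into a p-value. A 2x2 table has (2-1)*(2-1) = 1 degree of freedom, and the chi-square
# survival function gives P(chi2 >= chistat) under the null of independence.
from scipy import stats
print("p-value:", stats.chi2.sf(chistat, df=1))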
###Output
_____no_output_____
###Markdown
Confidence Intervals
###Code
# Confidence intervals!
# Similar to hypothesis testing, but centered at sample mean
# Generally better than reporting the "point estimate" (sample mean)
# Why? Because point estimates aren't always perfect
import numpy as np
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data.
Using t-distribution and two-tailed test, default 95% confidence.
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Return a string with a pretty report of a confidence interval.
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
A string describing the interval
"""
#print('Mean: {}'.format(confidence_interval[0]))
#print('Lower bound: {}'.format(confidence_interval[1]))
#print('Upper bound: {}'.format(confidence_interval[2]))
s = "our mean lies in the interval [{:.2}, {:.2}]".format(
confidence_interval[1], confidence_interval[2])
return s
#conf int = [lower_bound]
coinflips = np.random.binomial(n=1,p=0.9, size = 100)
print(coinflips)
import scipy.stats as stats
stats.ttest_1samp(coinflips,0.5)
coinflip_interval = confidence_interval(coinflips) # Default 95% conf
coinflip_interval
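# Tying the test and the interval together (a sketch, not from the original notebook):
# with p=0.9 the 95% interval should sit well above 0.5, matching the tiny p-value
# from ttest_1samp above.
print(report_confidence_interval(coinflip_interval))
print("0.5 inside the interval?", coinflip_interval[1] <= 0.5 <= coinflip_interval[2])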
###Output
_____no_output_____
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.3. Refactor your code so it is elegant, readable, and can be easily run for all issues.
###Code
#stealing my code from yesterday
from google.colab import files
uploaded = files.upload()
import pandas as pd
import numpy as np
df = pd.read_csv('house-votes-84.txt',header = None)
df = df.rename(index=str,columns={0: "Party"})
for col in range(1,17):
df[col] = np.where(df[col]=='y',1,df[col])
df[col] = np.where(df[col]=='n',0,df[col])
# I used np.where here rather than a blanket string replace so the 'n' in 'republican' isn't changed
df_rep = df[df['Party']=='republican'].copy()
df_dem = df[df['Party']=='democrat'].copy()
df_rep = df_rep.replace('?',np.nan)
df_dem = df_dem.replace('?',np.nan)
rep_vote_means = [round(df_rep[col].mean(),0) for col in range(1,17)]
dem_vote_means = [round(df_dem[col].mean(),0) for col in range(1,17)]
for i in range (1,17):
df_rep[i] = np.where(df_rep[i].isna(),rep_vote_means[i-1],df_rep[i])
for i in range (1,17):
df_dem[i] = np.where(df_dem[i].isna(),dem_vote_means[i-1],df_dem[i])
df_clean = pd.concat([df_rep, df_dem])  # DataFrame.append was removed in pandas 2.0
df_rep = df_rep.drop(['Party'],axis=1)
df_dem = df_dem.drop(['Party'],axis=1)
df_clean.head().drop(['Party'],axis=1)
#taking a look at combined voting record for one column confidence interval
confidence_interval(df_clean[1])
#now looking at confidence intervals for all combined voting record:
combined_ci = []
for col in df_clean.columns:
aaa = confidence_interval(df_clean[col])
combined_ci.append(aaa)
print(confidence_interval(df_clean[col]))
#now looking for individual party voting records
rep_ci = []
for col in df_rep.columns:
bbb = confidence_interval(df_rep[col])
rep_ci.append(bbb)
print(confidence_interval(df_rep[col]))
#now looking for individual party voting records
dem_ci = []
for col in df_dem.columns:
ccc = confidence_interval(df_dem[col])
dem_ci.append(ccc)
print(confidence_interval(df_dem[col]))
# This is basically saying that for vote 4, for example, if we repeatedly sampled 100 Republican voters,
# then 95% of the time the number voting for the bill would fall between roughly 97 and 100
combined_ci[0]
#df_clean = df_clean.drop(['Party'],axis=1)
clean_mean = [combined_ci[i][0] for i in range(0,len(combined_ci))]
clean_err = [(combined_ci[i][0]-combined_ci[i][1]) for i in range(0,len(combined_ci))]
plt.errorbar(df_clean.columns, clean_mean, xerr=0.5, yerr=clean_err, linestyle='',color='g')
plt.show()
rep_mean = [rep_ci[i][0] for i in range(0,len(rep_ci))]
rep_err = [(rep_ci[i][0]-rep_ci[i][1]) for i in range(0,len(rep_ci))]
plt.errorbar(df_rep.columns, rep_mean, xerr=0.5, yerr=rep_err, linestyle='',color='r')
plt.show()
dem_mean = [dem_ci[i][0] for i in range(0,len(dem_ci))]
dem_err = [(dem_ci[i][0]-dem_ci[i][1]) for i in range(0,len(dem_ci))]
plt.errorbar(df_dem.columns, dem_mean, xerr=0.5, yerr=dem_err, linestyle='')
plt.show()
# You can see that the voting record for each party individually is much more polarised on many votes, with a tight confidence interval.
# You can also see that the Republicans were more evenly split on only 3 votes, while the Democrats were evenly split on 7 votes,
# though this sample size of 16 votes is not large enough to draw any conclusions from that.
# The confidence intervals tell you that for some votes there was very little intra-party divergence - e.g. vote 4 for both parties, vote 14 for the
# Republicans and vote 16 for the Democrats.
###Output
_____no_output_____
###Markdown
Resources- [Interactive visualize the Chi-Squared test](https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html)- [Calculation of Chi-Squared test statistic](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test)- [Visualization of a confidence interval generated by R code](https://commons.wikimedia.org/wiki/File:Confidence-interval.svg)- [Expected value of a squared standard normal](https://math.stackexchange.com/questions/264061/expected-value-calculation-for-squared-normal-distribution) (it's 1 - which is why the expected value of a Chi-Squared with $n$ degrees of freedom is $n$, as it's the sum of $n$ squared standard normals)
###Code
# ignore the below - was just messing around with a few examples
#messing around witi stats examples
import numpy as np
from scipy import stats
N = 10000
a = np.random.normal(0, 1, N)
mean, sigma = a.mean(), a.std(ddof=1)
conf_int_a = stats.norm.interval(0.68, loc=mean, scale=sigma)
print('{:0.2%} of the single draws are in conf_int_a'
.format(((a >= conf_int_a[0]) & (a < conf_int_a[1])).sum() / float(N)))
M = 1000
b = np.random.normal(0, 1, (N, M)).mean(axis=1)
conf_int_b = stats.norm.interval(0.68, loc=0, scale=1 / np.sqrt(M))
print('{:0.2%} of the means are in conf_int_b'
.format(((b >= conf_int_b[0]) & (b < conf_int_b[1])).sum() / float(N)))
#trying binomial distribution
NN = 10000
aaa = np.random.binomial(100, 0.25,NN)
mean, sigma = aaa.mean(), aaa.std(ddof=1)
conf_int_aaa = stats.norm.interval(0.68, loc=mean, scale=sigma)
print('{:0.2%} of the single draws are in conf_int_a'
.format(((aaa >= conf_int_aaa[0]) & (aaa < conf_int_aaa[1])).sum() / float(NN)))
aaa
import matplotlib.pyplot as plt
df = pd.DataFrame()
df['category'] = np.random.choice(np.arange(10), 1000, replace=True)
df['number'] = np.random.normal(df['category'], 1)
mean = df.groupby('category')['number'].mean()
std = df.groupby('category')['number'].std()
plt.errorbar(mean.index, mean, xerr=0.5, yerr=2*std, linestyle='')
plt.show()
df.head()
###Output
_____no_output_____
###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.statsCandidate topics to explore:- `scipy.stats.chi2` - the Chi-squared distribution, which we can use to reproduce the Chi-squared test- Calculate the Chi-Squared test statistic "by hand" (with code), and feed it into `chi2`- Build a confidence interval with `stats.t.ppf`, the t-distribution percentile point function (the inverse of the CDF) - we can write a function to return a tuple of `(mean, lower bound, upper bound)` that you can then use for the assignment (visualizing confidence intervals)
###Code
# Taking requests! Come to lecture with a topic or problem and we'll try it.
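# One of the candidate topics above, sketched here since this cell was left empty:
# stats.t.ppf is the percentile point function (inverse CDF), so the two-tailed 95%
# critical value approaches the familiar z value of 1.96 as the sample size grows.
from scipy import stats
for n in [5, 30, 1000]:
    print(n, "samples -> t critical value:", round(stats.t.ppf(0.975, n - 1), 3))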
###Output
_____no_output_____
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.3. Refactor your code so it is elegant, readable, and can be easily run for all issues.
###Code
import pandas as pd
import numpy as np
from scipy import stats
from scipy.stats import normaltest
from scipy.stats import kruskal
from random import randint
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
# the data file does not have a header so we'll need to create one
# attribute info copy and pasted from name file
attribute_info = '''1. Class-Name: 2 (democrat, republican)
2. handicapped-infants: 2 (y,n)
3. water-project-cost-sharing: 2 (y,n)
4. adoption-of-the-budget-resolution: 2 (y,n)
5. physician-fee-freeze: 2 (y,n)
6. el-salvador-aid: 2 (y,n)
7. religious-groups-in-schools: 2 (y,n)
8. anti-satellite-test-ban: 2 (y,n)
9. aid-to-nicaraguan-contras: 2 (y,n)
10. mx-missile: 2 (y,n)
11. immigration: 2 (y,n)
12. synfuels-corporation-cutback: 2 (y,n)
13. education-spending: 2 (y,n)
14. superfund-right-to-sue: 2 (y,n)
15. crime: 2 (y,n)
16. duty-free-exports: 2 (y,n)
17. export-administration-act-south-africa: 2 (y,n)'''
# clean up attribute info to use for column headers
names = (attribute_info.replace(': 2 (y,n)', ' ')
.replace(': 2 (democrat, republican)', ' ')
.replace('.', ' ')
.split())
# finish cleaning by getting rid of numbers
nums = [str(n) for n in range(0, 18)]
names = [name for name in names if name not in nums]
# import the csv without the first row as a header
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data', header=None)
# add header (names)
df.columns = names
# replace all 'y', 'n', and '?' values with python friendly values
# replace '?' with a randomly chosen 0 or 1 to avoid NaNs (note: randint(0,1) is evaluated once, so every '?' gets the same value)
df = df.replace({'y': 1, 'n': 0, '?': randint(0,1)})
print(df.shape)
# create dataframes for each party
rep = df[df['Class-Name'] == 'republican']
dem = df[df['Class-Name'] == 'democrat']
# create a function to get mean, confidence interval, and the interval (for use in graphing)
def confidence_interval(data, confidence = 0.95):
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)
return (mean, mean - interval, mean + interval, interval)
# create a reporter for all of the values calculated with the above function
def report_confidence_interval(confidence_interval):
print('Mean: {}'.format(confidence_interval[0]))
print('Lower bound: {}'.format(confidence_interval[1]))
print('Upper bound: {}'.format(confidence_interval[2]))
s = "our mean lies in the interval [{:.5}, {:.5}]".format(confidence_interval[1], confidence_interval[2])
return s, confidence_interval[0]
dem_means = []
rep_means = []
dem_er = []
rep_er = []
for name in names[1:]:
print(name)
print('Democrats')
dem_means.append(confidence_interval(dem[name])[0])
dem_er.append(confidence_interval(dem[name])[3])
print(report_confidence_interval(confidence_interval(dem[name])))
print('Republicans')
rep_means.append(confidence_interval(rep[name])[0])
rep_er.append(confidence_interval(rep[name])[3])
print(report_confidence_interval(confidence_interval(rep[name])))
print(' ')
# bar heights (with a subset of the data)
part_dem_means = dem_means[:5]
part_rep_means = rep_means[:5]
# we need to cut down the names to fit
part_names = names [1:6]
# error bars (with a subset of the data)
part_dem_ers = dem_er[:5]
part_rep_ers = rep_er[:5]
# plot a bar graph
plt.style.use('fivethirtyeight')
barWidth = 0.4
r1 = np.arange(len(part_dem_means))
r2 = [x + barWidth for x in r1]
plt.bar(r1, part_dem_means, width = barWidth, color = 'blue', edgecolor = 'black', yerr = part_dem_ers, capsize = 4, label = 'Democrats')
plt.bar(r2, part_rep_means, width = barWidth, color = 'red', edgecolor = 'black', yerr = part_rep_ers, capsize = 4, label = 'Republicans')
plt.title('Support for bills by party')
plt.legend()
plt.xticks([r + barWidth for r in range(len(part_dem_means))], names[1:6], rotation = 45, ha="right");
###Output
_____no_output_____
###Markdown
InterpretationMost of the confidence intervals are pretty large. If you were trying to extrapolate this data to a population (sort of a nonsensical situation, because congress is the population), you might find a value much different from what you predicted. Using the handicapped infants bill as an example, the predicted outcome would be ~62%, but because the confidence interval is ~6%, the actual value could be expected to be anywhere between ~56% and ~68%.
###Code
print(dem_means[0])
print(dem_er[0])
###Output
0.6179775280898876
0.117313652657326
###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.stats
###Code
# Taking requests! Come to lecture with a topic or problem and we'll try it.
from scipy import stats
b1 = stats.binom(n=100, p=0.6)
b1.mean() # Not randomized
b1.median()
chi2 = stats.chi2(5) # A look at the chi distribution
chi2.mean()
chi2.median() # Skew when median does not equal the mean, in the case of chi square a right skew
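# Quick check of that skew claim (a sketch): scipy reports skewness directly, and the
# chi-square skew, sqrt(8/df), shrinks toward 0 (symmetric) as df grows.
print(stats.chi2.stats(5, moments='s'))    # ~1.26, clearly right-skewed
print(stats.chi2.stats(500, moments='s'))  # ~0.13, close to symmetric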
# Confidence Interval
# Similar to hypothesis testing, but centered at sample mean
# Better than reporting the "point estimate" (sample mean)
# why? Because point estimates aren't always perfect
import numpy as np
import pandas as pd
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data
Using t-distribution and two-tailed test, default 95% confidence
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Print a pretty report of a confidence interval
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
none, but prints to screen report
"""
print('Mean: {:.3f}'.format(confidence_interval[0]))
print('Lower Bound: {:.3f}'.format(confidence_interval[1]))
print('Upper Bound: {:.3f}'.format(confidence_interval[2]))
coinflips = np.random.binomial(n=1, p=0.5, size=100)
print(coinflips)
import pandas as pd
df = pd.DataFrame(coinflips)
df.describe()
coinflip_interval = confidence_interval(coinflips, confidence=0.95)
coinflip_interval
report_confidence_interval(coinflip_interval)
###Output
Mean: 0.440
Lower Bound: 0.341
Upper Bound: 0.539
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here. Confidence Intervals for Drugs.com Data toward the end of this NoteBook
###Code
# TODO - your code!
import pandas as pd
import numpy as np
import scipy
# TODO - your code here!
names = ['Political_Party', 'handicapped_infants', 'water_project_cost_sharing', 'adoption_of_the_budget', 'physician_fee_freeze',
'el_salvadore_aid', 'religious_groups_in_schools', 'anti_satellite_test_ban', 'aid_to_contras', 'mx_missile',
'immigration', 'synfuels_corporation_cutback', 'education_spending', 'superfund_right_to_sue', 'crime',
'duty_free_exports', 'export_administration_act']
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data', header=None, names=names, na_values='?')
df.head()
df.shape
# Replace y and n with 1s and 0s and replace NaNs with 0.5
df = df.replace({'y': 1, 'n': 0, np.nan: .5})
df.head()
# Change all floats to ints
df.loc[0:, 'handicapped_infants':'export_administration_act'] = df.loc[0:, 'handicapped_infants':'export_administration_act'].astype('int')
df.dtypes
df.head()
df[df['Political_Party']=='republican'].loc[0:, 'handicapped_infants':'export_administration_act']
###Output
_____no_output_____
###Markdown
t-test for equal means
###Code
# Compare democrat means against republican means for t-test
scipy.stats.ttest_ind(df[df['Political_Party']=='republican'].loc[0:, 'handicapped_infants':'export_administration_act'],
df[df['Political_Party']=='democrat'].loc[0:, 'handicapped_infants':'export_administration_act'], equal_var=False)
###Output
_____no_output_____
###Markdown
Republican Immigration vs. Democrat Immigration
###Code
scipy.stats.ttest_ind(df[df['Political_Party']=='republican'].loc[0:, 'immigration'],
df[df['Political_Party']=='democrat'].loc[0:, 'immigration'], equal_var=False)
###Output
_____no_output_____
###Markdown
Democrat handicapped_infants vs. Republican handicapped_infants
###Code
scipy.stats.ttest_ind(df[df['Political_Party']=='democrat'].loc[0:, 'handicapped_infants'],
df[df['Political_Party']=='republican'].loc[0:, 'handicapped_infants'], equal_var=False)
df_republican_df = df[df['Political_Party']=='republican'].loc[0:, 'handicapped_infants':'export_administration_act']
df_republican_df.describe()
df_democrat_df = df[df['Political_Party']=='democrat'].loc[0:, 'handicapped_infants':'export_administration_act']
df_democrat_df.describe()
###Output
_____no_output_____
###Markdown
95% confidence interval for Democrat handicapped_infants
###Code
demo_handicapped_mean = df_democrat_df['handicapped_infants'].mean()
se_demo_handicapped_infants = df_democrat_df.handicapped_infants.std()/(np.sqrt(len(df_democrat_df)))
se_demo_handicapped_infants
t_value = 1.96
print("95% Confidence Interval: ({:.4f}, {:.4f})".format(demo_handicapped_mean - t_value * se_demo_handicapped_infants, demo_handicapped_mean + t_value * se_demo_handicapped_infants))
###Output
95% Confidence Interval: (0.5250, 0.6435)
###Markdown
95% Confidence Interval for Republican handicapped_infants
###Code
repub_handicapped_mean = df_republican_df['handicapped_infants'].mean()
se_repub_handicapped_infants = df_republican_df.handicapped_infants.std()/(np.sqrt(len(df_republican_df)))
t_value = 1.96
print("95% Confidence Interval: ({:.4f}, {:.4f}))".format(repub_handicapped_mean - t_value * se_repub_handicapped_infants, repub_handicapped_mean + t_value * se_demo_handicapped_infants))
###Output
95% Confidence Interval: (0.1257, 0.2438))
###Markdown
Political Parties vs.handicapped_infants with 95% Confidence Intervals
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster');
sns.barplot(x=df.Political_Party, y=df.handicapped_infants, ci=95);
# Drugs.com Data
!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip
! unzip drugsCom_raw.zip
df = pd.read_table('drugsComTrain_raw.tsv')
df.head()
df.shape
df.dtypes
df['rating'].hist()
df.head()
df['rating'].describe()
df.describe()
df.corr()
df['drugName'].value_counts()
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
df['condition'].value_counts()
# Descriptive stats for Birth Control drugs
df[df['condition']=='Birth Control'].describe()
# birth control DF
birth_control = df[df['condition']=='Birth Control']
birth_control.head()
birth_control[birth_control['rating'] > 6].describe()
birth_control[birth_control['rating'] < 6].describe()
# Confidence Interval
# Similar to hypothesis testing, but centered at sample mean
# Better than reporting the "point estimate" (sample mean)
# why? Because point estimates aren't always perfect
import numpy as np
import pandas as pd
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data
Using t-distribution and two-tailed test, default 95% confidence
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Print a pretty report of a confidence interval
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
none, but prints to screen report
"""
print('Mean: {:.3f}'.format(confidence_interval[0]))
print('Lower Bound: {:.3f}'.format(confidence_interval[1]))
print('Upper Bound: {:.3f}'.format(confidence_interval[2]))
"""
coinflip_interval = confidence_interval(coinflips, confidence=0.95)
coinflip_interval
"""
birth_control_rating = birth_control['rating']
birth_control_rating_interval = confidence_interval(birth_control_rating, confidence=0.95)
###Output
_____no_output_____
###Markdown
Confidence Interval of Birth Control Drugs
###Code
report_confidence_interval(birth_control_rating_interval)
erectile_dysfunction = df[df['condition']=='Erectile Dysfunction']
erectile_dysfunction.head()
hepatitis_C = df[df['condition']=='Hepatitis C']
hepatitis_C.head()
###Output
_____no_output_____
###Markdown
Confidence Intervals of Some Sexually Disease Related Drugs
###Code
erectile_dysfunction_interval = confidence_interval(erectile_dysfunction['rating'], confidence=0.95)
report_confidence_interval(erectile_dysfunction_interval)
hepatitis_C_interval = confidence_interval(hepatitis_C['rating'], confidence=0.95)
report_confidence_interval(hepatitis_C_interval)
###Output
Mean: 8.412
Lower Bound: 8.205
Upper Bound: 8.618
###Markdown
Confidence Interval for Prostate Cancer
###Code
prostate = df[df['condition']=='Prostate Cance']
prostate_confidence_interval = confidence_interval(prostate['rating'], confidence=0.95)
report_confidence_interval(prostate_confidence_interval)
df.groupby(df['condition']=='Prostate Cance').mean()
birth_control_versus_rest = df.groupby(df['condition']=='Birth Control')['rating'].mean()
birth_control_versus_rest
erectile_dysfunction.head()
everything_but_birth_control = df.groupby(df['condition']!='Birth Control')['rating'].mean()
everything_but_birth_control
hepatitis_C = df.groupby(df['condition']!='Hepatitis C')['rating'].mean()
hepatitis_C
hepatitis_C[1]
type(birth_control_versus_rest)
###Output
_____no_output_____
###Markdown
Confidence Interval Plots: Erectile Dysfunction vs. Birth Control
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster');
#sns.barplot(x=birth_control_versus_rest.rating, ci=95);
plt.figure(figsize=(20, 2))
plt.xlim(0, 10)
plt.title('Erectile Dysfunction Ratings')
sns.barplot(erectile_dysfunction['rating'], ci=95)
plt.figure(figsize=(20, 2))
plt.xlim(0, 10)
plt.title('Birth Control Ratings')
sns.barplot(birth_control_rating, ci=95)
###Output
/usr/local/lib/python3.6/dist-packages/seaborn/categorical.py:1428: FutureWarning: remove_na is deprecated and is a private function. Do not use.
stat_data = remove_na(group_data)
###Markdown
With respect to these features, the confidence interval says that if we were to repeatedly take more samples of ratings for these drugs over and over, 95% of the time their means would land somewhere in the confidence interval. The Birth Control ratings have a pretty tight range, perhaps due to the abundance of ratings data compared to all the other drugs.
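To make the repeated-sampling interpretation concrete, the sketch below (synthetic normal data, not the drugs.com ratings) repeatedly draws samples from a population with a known mean and counts how often the resulting 95% interval captures that true mean:
###Code
# Coverage simulation on synthetic data (assumed values, not the drug ratings).
import numpy as np
from scipy import stats
np.random.seed(42)
true_mean = 7.0
trials = 1000
hits = 0
for _ in range(trials):
    sample = np.random.normal(loc=true_mean, scale=2.0, size=50)
    margin = stats.sem(sample) * stats.t.ppf(0.975, len(sample) - 1)
    if sample.mean() - margin <= true_mean <= sample.mean() + margin:
        hits += 1
print('Empirical coverage: {:.1%}'.format(hits / trials))   # expected to land near 95%
###Output
_____no_output_____
###Markdown
The empirical coverage should land close to 95%, which is the sense in which the intervals reported above can be trusted.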
###Code
###Output
_____no_output_____
###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
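###Markdown
For comparison with the alternatives just shown, here is the baseline two-sample t-test mentioned at the top of this section, run on made-up data (the group means, spread, and sizes are assumptions for illustration only):
###Code
# Minimal two-sample (Welch) t-test sketch on made-up data.
import numpy as np
from scipy import stats
np.random.seed(42)
group_a = np.random.normal(loc=5.0, scale=1.0, size=30)   # assumed group distributions
group_b = np.random.normal(loc=5.5, scale=1.0, size=30)
print(stats.ttest_ind(group_a, group_b, equal_var=False))  # returns the t statistic and p-value
###Output
_____no_output_____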
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.stats
###Code
# Taking requests! Come to lecture with a topic or problem and we'll try it.
from scipy import stats
b1 = stats.binom(n=100, p=0.6)
b1.mean()
b1.median()
import random
random.seed(100) # Reproducibility! Next line should give 2386
random.randint(0, 10000)
chi2 = stats.chi2(500)
chi2.mean()
chi2.median()
# Confidence intervals!
# Similar to hypothesis testing, but centered at sample mean
# Better than reporting the "point estimate" (sample mean)
# Why? Because point estimates aren't always perfect
import numpy as np
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data.
Using t-distribution and two-tailed test, default 95% confidence.
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Return a string with a pretty report of a confidence interval.
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
a string reporting the interval containing the mean
"""
#print('Mean: {}'.format(confidence_interval[0]))
#print('Lower bound: {}'.format(confidence_interval[1]))
#print('Upper bound: {}'.format(confidence_interval[2]))
s = "our mean lies in the interval ]{:.2}, {:.2}[".format(
confidence_interval[1], confidence_interval[2])
return s
x = 2
print('x is: {}'.format(x))
coinflips = np.random.binomial(n=1, p=0.5, size=100)
print(coinflips)
import pandas as pd
df = pd.DataFrame(coinflips)
df.describe()
coinflip_interval = confidence_interval(coinflips, confidence=0.95)
coinflip_interval
report_confidence_interval(coinflip_interval)
###Output
_____no_output_____
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
###Code
# Getting started with drug data
# http://archive.ics.uci.edu/ml/datasets/Drug+Review+Dataset+%28Drugs.com%29
!wget http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip
!unzip drugsCom_raw.zip
!head drugsComTrain_raw.tsv
df = pd.read_table('drugsComTrain_raw.tsv')
df.head()
# Going to evaluate rating and usefulCount to see if there is any relationship between them:
rating_utility = df.drop(['Unnamed: 0', 'drugName', 'condition', 'review', 'date'], axis = 1)
rating_utility.head()
# Taking the means of ratings and useful counts
r_mean = rating_utility.rating.mean()
u_mean = rating_utility.usefulCount.mean()
import matplotlib.pyplot as plt
ax = plt.scatter(x="rating", y="usefulCount", data=rating_utility)
mean = rating_utility.mean(axis = 1)
std = rating_utility.std(axis = 1)
n= rating_utility.shape[1]
yerr = std / np.sqrt(n) * stats.t.ppf(1-0.05/2, n - 1)
plt.figure()
plt.bar(range(rating_utility.shape[0]), mean, yerr = yerr)
plt.show()
###Output
_____no_output_____
###Markdown
Lambda School Data Science Module 142 Sampling, Confidence Intervals, and Hypothesis Testing Prepare - examine other available hypothesis testsIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:
###Code
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Alternative to first table
'''
In Out
Male [[2 1]]
Female [[1 2]]
Females want to eat outside in this data, chi-square test would have low p-value/ not independent
'''
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
###Output
KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)
KruskalResult(statistic=7.0, pvalue=0.0301973834223185)
###Markdown
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important. Live Lecture - let's explore some more of scipy.stats
###Code
# Taking requests! Come to lecture with a topic or problem and we'll try it.
# Play with distributions
from scipy.stats import chi2
chi2_5 = chi2(5)
chi2_5
chi2_5.mean()
chi2_5.median()
chi2_500 = chi2(500)
print(chi2_500.mean())
print(chi2_500.median())
# From Cole
import scipy
import numpy as np
import matplotlib.pyplot as plt
data = scipy.stats.norm.rvs(size=100000, loc=0, scale=1.5, random_state=123)
X = np.linspace(-5.0, 5.0, 100)
hist = np.histogram(data, bins=100)
hist_dist = scipy.stats.rv_histogram(hist)
plt.plot(X, hist_dist.pdf(X), label='PDF')
from scipy.stats import normaltest
normaltest(chi2_500.rvs(10000000))
# Calculating chi square from hand
# 1 male wants to eat outside, 2 inside
# 2 females want to eat outside, 1 inside
chi_data = [[1, 2], [2, 1]]
import pandas as pd
chi_data = pd.DataFrame(chi_data, columns=('Outside', 'Inside'))
chi_data
# Explaining margins
# Total number of males (first row) = 3
# Total number of females (second row) = 3
# Total number of people who prefer outside = 3
# Total number of people who prefer inside = 3
# Explaining margin proportions
# Proportion of first row = obs / total = (3 males) / (3 males + 3 females)
# = 3/6 = 0.5
# All the other rows/cols also have 0.5 proportion margins
# Expected value for top left cell ( males who want to eat outside)
# (0.5(proportion of males) * 0.5(proportion of outside eaters)) * 6 = 1.5
# Because of the symmetry of this little example, we know the expected value of
# all cells is 1.5 (i.e. the same, because margins are all the same)
# chi square test statistic is the sum of squared deviations from these expected values
expected_values = [[1.5, 1.5], [1.5, 1.5]]
deviation = (((0.5) ** 2) / 1.5) * 4 # 0.5^2 deviation per cell
print(deviation)
# Close but not same as scipy
# a little more properly, but not fully from scratch
def lazy_chisquare(observed, expected):
chisquare = 0
for row_obs, row_exp in zip(observed, expected):
for obs, exp in zip(row_obs, row_exp):
chisquare += (obs - exp)**2 / exp
return chisquare
chi_data = [[1, 2], [2, 1]]
expected_values = [[1.5, 1.5], [1.5, 1.5]]
lazy_chisquare(chi_data, expected_values)
# Three degrees of freedom (n - 1)
# Running above with scipy library
from scipy.stats import chisquare
chisquare(chi_data, axis=None)
# Confidence intervals!
# Similar to hypothesis testing, but centered at sample mean
# Generally better than reporting the "point estimate" (sample mean)
# Why? Because point estimates aren't always perfect
import numpy as np
from scipy import stats
def confidence_interval(data, confidence=0.95):
"""
Calculate a confidence interval around a sample mean for given data.
Using t-distribution and two-tailed test, default 95% confidence.
Arguments:
data - iterable (list or numpy array) of sample observations
confidence - level of confidence for the interval
Returns:
tuple of (mean, lower bound, upper bound)
"""
data = np.array(data)
mean = np.mean(data)
n = len(data)
stderr = stats.sem(data)
interval = stderr * stats.t.ppf((1 + confidence) / 2., n - 1)
return (mean, mean - interval, mean + interval)
def report_confidence_interval(confidence_interval):
"""
Return a string with a pretty report of a confidence interval.
Arguments:
confidence_interval - tuple of (mean, lower bound, upper bound)
Returns:
a string reporting the interval containing the mean
"""
#print('Mean: {}'.format(confidence_interval[0]))
#print('Lower bound: {}'.format(confidence_interval[1]))
#print('Upper bound: {}'.format(confidence_interval[2]))
s = "our mean lies in the interval [{:.2}, {:.2}]".format(
confidence_interval[1], confidence_interval[2])
return s
stats.t.ppf??
x = 2
print('x is: {}'.format(x))
coinflips = np.random.binomial(n=1, p=0.5, size=100)
print(coinflips)
stats.ttest_1samp(coinflips, 0.5)
df = pd.DataFrame(coinflips)
df.describe()
coinflip_interval = confidence_interval(coinflips) # Default 95% conf
coinflip_interval
report_confidence_interval(coinflip_interval)
###Output
_____no_output_____
###Markdown
Assignment - Build a confidence intervalA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times."For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):1. Generate and numerically represent a confidence interval2. Graphically (with a plot) represent the confidence interval3. Interpret the confidence interval - what does it tell you about the data and its distribution?Stretch goals:1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
###Code
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data'
cols = [
'Class Name',
'handicapped-infants',
'water-project-cost-sharing',
'adoption-of-the-budget-resolution',
'physician-fee-freeze',
'el-salvador-aid',
'religious-groups-in-schools',
'anti-satellite-test-ban',
'aid-to-nicaraguan-contras',
'mx-missile',
'immigration',
'synfuels-corporation-cutback',
'education-spending',
'superfund-right-to-sue',
'crime',
'duty-free-exports',
'export-administration-act-south-africa'
]
df = pd.read_csv(url, names=cols)
df.head()
df = df.replace({'?': np.nan, 'n': 0, 'y': 1})
df.head()
ct = pd.crosstab(df['Class Name'], df['immigration'], normalize='index')
ct
dems, repubs = df[df['Class Name'] == 'democrat'], df[df['Class Name'] == 'republican']
dems.head(5)
repubs.head(5)
dems_immigration, repubs_immigration = dems['immigration'].dropna(), repubs['immigration'].dropna()
# Confidence interval for democrats' vote on immigration
dems_immigration_interval = confidence_interval(dems_immigration, confidence=0.95)
dems_immigration_interval
report_dems = report_confidence_interval(dems_immigration_interval)
report_dems
repubs_immigration_interval = confidence_interval(repubs_immigration, confidence=0.95)
repubs_immigration_interval
report_repubs = report_confidence_interval(repubs_immigration_interval)
report_repubs
!pip install --upgrade seaborn
import seaborn as sns
sns.__version__
#sns.catplot(dems['immigration'], data=dems, kind='bar')
#sns.catplot??
import random
sample_list = []
# Calculated the mean of a (n=100) sample 500 times
for _ in range(500):
random_sample = [dems_immigration.sample(100).mean()]
sample_list.append(random_sample)
#sample_list
# Made the sample list into a dataframe
dem_imm_means = pd.DataFrame(sample_list)
# Plotted with 'density'
dem_imm_means.plot.density()
# Made vertical lines with lower and upper confidence limits
plt.axvline(dems_immigration_interval[1])
plt.axvline(dems_immigration_interval[2])
# Made red vertical line with another random sample. The higher n is, the more likely
# this line will stay within the confidence interval
plt.axvline([dems_immigration.sample(150).mean()], color='r');
# Theoretically, you could run this cell 100 times, and the red line would
# fall within the confidence interval 95 times
###Output
_____no_output_____
###Markdown
Assignment Summary:For the assignment, I specifically focused on the votes for republicans and democrats on the immigration issue. The democrats mostly voted 'no', but only by a small margin. The republicans mostly voted 'yes', but, again, only by a small margin. As a result, the confidence intervals for both republicans and democrats, on the immigration issue, were overlapping. This indicates a similarity between republicans and democrats on this issue in 1984. Today, the two parties are far from similar on immigration.
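As a complementary check on the overlap argument, a two-sample t-test can quantify the difference directly. This sketch reuses the dems_immigration and repubs_immigration series defined earlier and is not an additional requirement of the assignment:
###Code
# Sketch: back up the overlapping-interval argument with a two-sample (Welch) t-test,
# reusing dems_immigration / repubs_immigration from the cells above.
from scipy import stats
print(stats.ttest_ind(dems_immigration, repubs_immigration, equal_var=False))
# a p-value above 0.05 would be consistent with the overlapping confidence intervals
###Output
_____no_output_____
###Markdown
The test is only a cross-check; the interval plot above already shows the overlap.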
###Code
###Output
_____no_output_____ |
EFSM_uCT.ipynb | ###Markdown
Import Libraries
###Code
#Standard
import os
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
import math as m
import pandas as pd
import scipy.stats as stats
from scipy.stats import iqr, kurtosis, skew
from tqdm import tnrange, tqdm_notebook
from statannot import add_stat_annotation
#import pillow (PIL) to allow for image cropping
import PIL
from PIL import Image, ImageChops
from io import BytesIO
#image simplification and priming
#Convolution libraries
from scipy import signal
from skimage.measure import label, regionprops
from sklearn.preprocessing import Binarizer
#from sklearn.preprocessing import Binarizer
from scipy import ndimage
#Skimage used for direct detection ellipse
from skimage import io
from skimage import data, color, img_as_ubyte
from skimage.color import rgb2gray
from skimage.feature import canny
from skimage.transform import hough_ellipse
from skimage.draw import ellipse_perimeter
from skimage.transform import rescale, resize, downscale_local_mean
#Skimage used for direct detection circles
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
#OpenCV
import cv2
###Output
_____no_output_____
###Markdown
Common Functions
###Code
#Define function for cutting off blank space from uCT image
def trim2(im,padding,offset):
#selecting the outermost pixels
bg = Image.new(im.mode, im.size, im.getpixel((0,0)))
diff = ImageChops.difference(im, bg)
diff = ImageChops.add(diff, diff, 2.0, offset)
bbox = diff.getbbox()
#adding small border to each image
bbox = np.array(bbox).reshape(2,2)
bbox[0] -= padding
bbox[1] += padding
bbox = bbox.flatten()
bbox = tuple(bbox)
if bbox:
return bbox,padding
#Define function obscure which convolves 2D arrays with an (x,y) sized screen and then binarizes them
def obscure(image_array,x,y,invert):
screen = np.ones((x,y), dtype=int)
image_array = signal.convolve2d(image_array,screen, mode='same') #,mode='same')
#convert image into binary
#image_array = np.where(image_array > 127.5, 1, 0)
if invert == 'yes':
image_array = np.where(image_array > 127.5, 0, 1)
elif invert == 'no':
image_array = np.where(image_array > 127.5, 1, 0)
return image_array
#This allows for the addition of padding to a numpy array - useful for adding borders to images
#found: https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html
def pad_with(vector, pad_width, iaxis, kwargs):
pad_value = kwargs.get('padder', 10)
vector[:pad_width[0]] = pad_value
vector[-pad_width[1]:] = pad_value
return vector
#Define function to remove outlires from dataset
def reject_outliers(data, m = 2):
d = np.abs(data - np.median(data))
mdev = np.median(d)
s = d/mdev if mdev else 0
return data[s<m]
##Create file if does not exist
def checkdir(dir):
#First check if directory exists
if os.path.isdir(dir) == False:
os.makedirs(dir)
else:
pass
###Output
_____no_output_____
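###Markdown
A quick sanity check of two of the helpers above on tiny made-up arrays (the values are arbitrary and only meant to show the expected behaviour):
###Code
# Tiny sanity checks for the helper functions defined above (arrays are made up).
import numpy as np
# reject_outliers drops values far from the median
print(reject_outliers(np.array([1, 2, 3, 100])))   # expect the 100 to be removed
# obscure convolves with an x-by-y screen of ones and binarises the result
toy_img = np.array([[0, 255, 255, 0],
                    [0, 255, 255, 0],
                    [0,   0,   0, 0]])
print(obscure(toy_img, 2, 2, 'no'))                # binary array with the same shape as the input
###Output
_____no_output_____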
###Markdown
Load Image Data
###Code
#Initialisation of data read
#current location
location = os.getcwd()
#What is the root source of the data to be processed?
loc = '/Volumes/RISTO_EXHDD/uCT'
# loc = '/Users/ristomartin/OneDrive/Dropbox/UniStuff/DPhil/Experimental/python_analysis/uCT/hollow_fibre'
# loc = '/Volumes/Ristos_SSD/uCT'
#What is the name of the data set?
data_set = 'S4_50PPM_8HRS_5PX'#'S4_10PPM_03_5PX_1_Rec'
#Where is the exact data location
data_loc = loc+'/'+data_set+'/'+data_set+'_Rec2'
#location for saved data
save_loc = '/Users/ristomartin/OneDrive/Dropbox/UniStuff/DPhil/Experimental/python_analysis/uCT/flat_sheet/output/'
#Check that the save location exists
checkdir(save_loc)
#what to name to save files
savename = data_set
###Output
_____no_output_____
###Markdown
Processing Images
###Code
#Define Image processing script as function
def fibrefeature(dat_loc,filename,pxum,fibre_pad,fibre_scale,img_no,rotate,debug,debug_print,save_pic):
#check whether to save figures out or not
save_pic = save_pic
#Open the image
im = Image.open(dat_loc+'/'+filename)
#check if file needs converting
if im.mode == 'I;16':
#specify the image sampling mode
im.mode = 'I'
#convert the mode into 'L' (8-bit pixels, black and white) and save as temporary file
im = im.point(lambda i:i*(1./256)).convert('L')
#If already in RGB or RGBA then can convert directly
elif im.mode == 'RGB' or im.mode == 'RGBA':
im = im.convert('L')
#If image type is not an issue then just continue
else:
pass
#Once image is opened make copies of unedited image and array to use later on
#make copy of original unadultorated image
im_orig = im.copy()
#Make an array of the unadultorate image
im_orig_array = np.array(im_orig)
#Set the number formatting to unit8 for compatability with other packages
im_orig_array = im_orig_array.astype("uint8")
#create plot of convoluted and binarised image
if debug == True:
fig, ax = plt.subplots()
ax.imshow(im_orig_array, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'raw_image.png', dpi=300)
#As there may be a fair amount of noise in the background image select a region from the image which has nothing of interest in it
#This selected region is then used to find an average pixel value in the background region which will be subtracted from the image array
bg_x1 = round((0.90*im_orig_array.shape[1]))
bg_x2 = round((0.99*im_orig_array.shape[1]))
bg_y1 = round((0.90*im_orig_array.shape[0]))
bg_y2 = round((0.99*im_orig_array.shape[0]))
#Print out the background region indices that are to be used for background noise reduction
# print(bg_x1)
# print(bg_x2)
# print(bg_y1)
# print(bg_y2)
#Slice out the selected region
bg_select = im_orig_array[bg_y1:bg_y2,bg_x1:bg_x2]
#create plot selected image background
if debug == True:
fig, ax = plt.subplots()
ax.imshow(bg_select, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'_bg_select.png', dpi=300)
#Having cut out the background region convert the 2D array into a 1D
bg_med = bg_select.flatten()
#Calculate the median and mean average values of the background region
bg_select_med = np.median(bg_med)+2*np.std(bg_med)
bg_select_mean = np.mean(bg_med)+2*np.std(bg_med)
#Make a copy of the original image array
im_orig_array_c = im_orig_array.copy()
#Any pixel in the image which is less than the mean pixel value of the background region is to be set to zero i.e. made blank
im_orig_array_c[im_orig_array_c < bg_select_mean] = 0
#create plot selected image background
if debug == True:
fig, ax = plt.subplots()
ax.imshow(im_orig_array_c, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'_thresh_bg_select.png', dpi=300)
#Convert the background subtracted array back into an image that may be evaluated with pillow package
im = Image.fromarray(im_orig_array_c)
#create plot selected image background
if debug == True:
fig, ax = plt.subplots()
ax.imshow(im_orig_array_c, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'im_debug.png', dpi=300)
#trim image to just pixels of interest using trim2 as defined above
fibre_box,fpadding = trim2(im,(fibre_pad*2),trim_offset)
im = im.crop(fibre_box)
#convert trimmed image into array from pillow image
nim = np.array(im)
#make copy of trimmed image to be used later on if needed
nim_copy = nim.copy()
#create plot of convoluted and binarised image
if debug == True:
fig, ax = plt.subplots()
ax.imshow(nim_copy, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'cropped_image.png', dpi=300)
#Reduce image size to minimise the time associated with the processing of each image
nim = rescale(nim, fibre_scale, anti_aliasing=False)
#Convert the number format of all values in array to that of unit8 with range from 0-255 to conform with package standards
nim = np.uint8(nim * 255)
############################################################################################################################################################
### OUTER WALL DETECTION ###
############################################################################################################################################################
##As we want to find the length of connected pixels, first blur the image so that regions otherwise lost to binarisation may be considered
#Set the size of the filter to 7x7 pixels
x = 7
y = 7
#Applying GayssuanBlur to fill gaps in image caused by binarisation
nim = cv2.GaussianBlur(nim,(x,y),0)
#Apply OTSU's binarisation method to strip away as much noise as possible and convert image into binary
ret,fibre_thresh = cv2.threshold(nim,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
#Set the size of the screen used when convolving image
x = 2
y = 2
#Mark all pixels such that there is at least one filled pixel in their 2x2 neighborhood
nim = signal.convolve2d(fibre_thresh,np.ones((x,y), dtype=int), mode='same')
#create plot of convoluted and binarised image
if debug == True:
fig, ax = plt.subplots()
ax.imshow(nim, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'convolve2d_image.png', dpi=300)
#Need to find overall orientation of image and rotate to make horizontal
#To initially find orientation to trim image to leave behind only narrow region of interest this must be in both the x and y axis
#to ensure this is only done for first image make if gate to prevent multi runs
#Make place holder for the whether the image needs to be rotated or not
coords = 0
if img_no == 0:
#get location of all detected true pixels
coords = np.column_stack(np.where(nim == 255))
#consider the spread of data in each direction, as considering flat membranes expect smaller spread in direction of normal to the face of the membrane
#find the IQR in x_axis
iqr_x = iqr(coords[:,0])
#find the median in the x-axis
median_x = np.median(coords[:,0])
#find the IQR in y_axis
iqr_y = iqr(coords[:,1])
#If there is more of a spread in the number of filled pixels one direction of the other rotate the image to set the minimum spread in the x axis
if iqr_x < iqr_y:
#make note to rotate
rotate = 1
#If the image is to be rotated then
if rotate == 1:
#rotate the image
nim = nim.swapaxes(-2,-1)[...,::-1]
#get location of all detected true pixels
coords = np.column_stack(np.where(nim == 255))
#reconsider the median of the x axis as that previously of the y axis due to rotation
median_x = np.median(coords[:,0])
#reconsider IQR as well
iqr_x = iqr(coords[:,0])
#create plot of convoluted and binarised image
if debug == True:
fig, ax = plt.subplots()
ax.imshow(nim, 'gray')
if save_pic == True:
ax.figure.savefig(save_loc+filename+'rotated_convolve2d_image.png', dpi=300)
#Convert list of coordinates into pandas dataframe
coords = pd.DataFrame(coords)
#get unique y-axis points at which pixels are detected
unique_vals = pd.unique(coords[0].values)
#Make list to hold all of the membrane thicknesses
thicknesses = []
#Iterating through each of the unique y-axis points
for i in unique_vals:
#isolate only the data associated with y-axis
temp = coords.loc[coords[0] == i][1]
#Find the median and IQR of each line
Q1 = temp.quantile(0.25)
Q3 = temp.quantile(0.75)
IQR = Q3 - Q1
median = temp.quantile(0.5)
#convert temp from series to list
temp = temp.tolist()
#remove any values from temp which are more than 2 IQR from median
temp = [x if abs(x-median)<(2*IQR) else median for x in temp]
#find how thick membrane is at each y axis point
if max(temp)-min(temp) == 0:
pass
else:
thicknesses.append(max(temp)-min(temp))
#Convert list of thicknesses to an array so that stats may be determined
thicknesses = np.array(thicknesses)
#Calculate the stats associated with membrane thicknesses
thick_mean = np.mean(thicknesses)*(1/fibre_scale)*pxum
thick_med = (np.median(thicknesses)/fibre_scale)*pxum
q75, q25 = (np.percentile(thicknesses, [75 ,25])/fibre_scale)*pxum
thick_IQR = q75 - q25
############################################################################################################################################################
### Return membrane stats ###
############################################################################################################################################################
return (thick_mean,thick_med,thick_IQR,rotate)
###Output
_____no_output_____
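###Markdown
To make the thickness measurement at the end of fibrefeature easier to follow, here is a toy version of the same per-row calculation on a small made-up binary array (the IQR outlier trimming and the pixel-to-micron scaling are omitted for clarity):
###Code
# Toy illustration (made-up array) of the per-row thickness measurement used in fibrefeature.
import numpy as np
import pandas as pd
toy = np.zeros((5, 10), dtype=int)
toy[1:4, 2:7] = 255                                  # a 3-row "membrane" spanning columns 2..6
coords = pd.DataFrame(np.column_stack(np.where(toy == 255)))
thicknesses = []
for i in pd.unique(coords[0].values):                # one measurement per image row
    cols = coords.loc[coords[0] == i][1]
    thicknesses.append(cols.max() - cols.min())
print(thicknesses)                                   # [4, 4, 4]: max column minus min column per row
###Output
_____no_output_____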
###Markdown
Evaluate membrane properties
###Code
#If you want to run the code make sure this is true (this is here just to prevent accidently starting the code)
Run = False
#Through out the code there are the options to save out figures of each of the steps taken. These may be used for illustration purposes or to debug
show_all = False
#Debug prints will return various metrics from throughout the code to help understand why errors may be occouring
debug_print = False
#Set constant values used within the image analysis -
#How many um does each pixel represent?
pxum = 5
#How many pixels should be added as a buffer to the image after carrying out the initial trim
fibre_pad = 50
#What scale should be used when reducing image size. Note that lower scale is faster but may increase error
fibre_scale = 0.25
#What off set value should be subtracted from the background to minimise noise?
trim_offset = -80
#If you want to run the code
if Run == True:
#Set column names of the output dataframe
columns = ['filename','thick_IQR','thick_mean','thick_med']
#Make dataframe to hold all calculated metrics of membranes evaluated
cfp = pd.DataFrame(columns = columns)
#Generate list of all images that are to be evaluated in data location
files = [x for x in os.listdir(data_loc) if x.endswith(('.tif','.jpg','.png','.bmp'))==True and x.startswith('._')==False]
#Make counter for file number
img_no = -1
#Place holder as to if the image should be rotated or not? 0 is no 1 is yes
rotate = 0
#Iterating through files in the data location; with tqdm you get a nice progress bar and an estimate of how long the program will take to run
for filename in tqdm_notebook(files):
#increment the image count
img_no = img_no+1
#ascertain fibre properties as defined above
if show_all == True:
print(filename) #This is good to be kept on incase crashes on particular image - can then debug on that specific image
flatmem_properties = fibrefeature(data_loc,filename,pxum,fibre_pad,fibre_scale,img_no,rotate,True,True,True)
else:
# print(filename)
flatmem_properties = fibrefeature(data_loc,filename,pxum,fibre_pad,fibre_scale,img_no,rotate,False,False,False)
#If for some reason the program fails and does not return data for a given image, allow the program to continue to run rather than crash.
if flatmem_properties is None:
pass
else:
#Update the rotate value so that it does not have to be re-evaluated each time, as it should be the same for all images within a set
rotate = flatmem_properties[3]
#Copy evaluated membrane metrics to the summary dataframe
cfp = cfp.append({'filename':filename,'thick_mean':flatmem_properties[0],'thick_med':flatmem_properties[1],'thick_IQR':flatmem_properties[2]}, ignore_index=True)
#Once complete, print out the top of the summary dataframe as a quick sanity check before continuing
print(cfp.head())
#Save the summary dataframe out as a CSV to be processed further later on - important so you don't have to re-run the image processing each time
cfp.to_csv(save_loc+savename+'.csv')
###Output
_____no_output_____
###Markdown
Adding metadata
###Code
Run = False
if Run == True:
#Initially open processed image data csv file
processed_flat = pd.read_csv(save_loc + 'processed_flat.csv',index_col = 0)
#For each of the rows in the processed data csv file match the corresponding sample file to associated metadata
for file, row in processed_flat.iterrows():
#For each of row of data add the associated metadata
#Get pyridine concentration used
processed_flat.loc[file, 'pyridine_conc'] = sample_key.loc[sample_key['uCT_filename'] == file, 'pyridine_conc'].iloc[0]
#Get rotation speed used
processed_flat.loc[file, 'rotation_speed'] = sample_key.loc[sample_key['uCT_filename'] == file, 'rotation_speed'].iloc[0]
#Get the name of the polymer solution used
processed_flat.loc[file, 'solution_name'] = sample_key.loc[sample_key['uCT_filename'] == file, 'solution_name'].iloc[0]
#Get the amount of time used to spin each of the membranes
processed_flat.loc[file, 'time_spun'] = sample_key.loc[sample_key['uCT_filename'] == file, 'time_spun'].iloc[0]
#Get the voltage used
voltage = sample_key.loc[sample_key['uCT_filename'] == file, 'voltage'].iloc[0]
#Get the minimum voltage at which a taylor cone would form on the day of the spin
min_voltage = sample_key.loc[sample_key['uCT_filename'] == file, 'min_voltage'].iloc[0]
#Get the maximum voltage at which a taylor cone would form on the day of the spin
max_voltage = sample_key.loc[sample_key['uCT_filename'] == file, 'max_voltage'].iloc[0]
#Evaluate the range of the voltage at which a taylor cone would form on the day of the spin
processed_flat.loc[file, 'Voltage Range'] = (((voltage-min_voltage)/(max_voltage-min_voltage))*100).round(0)
#Having collated all the meta data check correctly recorded
print(processed_flat.head())
#save pandas data frame of all processed image data with assciated metadata as CSV
processed_flat.to_csv(save_loc + 'processed_flat.csv')
###Output
_____no_output_____
###Markdown
Plotting
###Code
Run = False
if Run == True:
#Initially import processed flat sheet membrane data along with metadata
processed_flat = pd.read_csv(save_loc + 'processed_flat.csv',index_col = 0)
#Create figure for new plot
fig, ax = plt.subplots()
#Before being able to plot, need to categorise data by variable e.g. by pyridine conc
#As all data is in a single column and we are only plotting a line graph, we can separate series using pandas groupby
for key, grp in processed_flat.sort_values(['time_spun']).groupby(['pyridine_conc']):
#set the data in each axis
x = grp['time_spun']
y = grp['median_thickness_um']
ax.plot(x,y, label = key)
#add precalculated IQR bands for each graph for force/extension line graph
ax.fill_between(grp['time_spun'], grp['median_thickness_um'] - grp['thickness_IQR_um'],grp['median_thickness_um'] + grp['thickness_IQR_um'], alpha=0.35)
#adding formatting into each graph
xlabel = 'Time Spun (Hrs)'
ylabel = 'Median Membrane Thickness ($\mu$m)'
ax.legend()
ax.set(xlabel=xlabel, ylabel= ylabel) #(xlabel=x, ylabel='Fibre Diameter ($\mu$m)')
#save figure out
fig.savefig(save_loc+'flat_thickness.png',bbox_inches='tight', dpi=300)
###Output
_____no_output_____ |
challenge1/model_full.ipynb | ###Markdown
Import the necessary libraries. Job ID ascending = time series??? Try sorting by job ID and using KougamiNet.
###Code
# Arrays and datasets
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Machine learning models
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
# Feature engineering and model evaluation
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import classification_report
from sklearn.metrics import plot_confusion_matrix
from sklearn.model_selection import cross_validate
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.feature_selection import mutual_info_classif
from imblearn.over_sampling import RandomOverSampler
# Use Intel's optimized sklearn
from sklearnex import patch_sklearn
patch_sklearn()
###Output
Intel(R) Extension for Scikit-learn* enabled (https://github.com/intel/scikit-learn-intelex)
###Markdown
Functions to simplify some processes
###Code
# get the model algorithm using it's default parameters
def get_model(name):
if name == "knn":
return KNeighborsClassifier()
elif name == "svm":
return SVC(gamma="auto", random_state=42)
elif name == "logistic":
return LogisticRegression(random_state=42)
elif name == "tree":
return DecisionTreeClassifier(random_state=42)
elif name == "forest":
return RandomForestClassifier(random_state=42)
elif name == "mlp":
return MLPClassifier(random_state=42)
# helper function to run predictions
def run_predictions(data, model_name, oversample=False, scale=False, pca=False, cv=10, test_size=0.33):
print("Running preprocessing...")
# for reproducible results
np.random.seed(42)
# split features and label
X = data.iloc[:, 1:-1].values
y = data.iloc[:, -1].values
# do oversampling to handle imbalanced class
sampler = RandomOverSampler(random_state=42)
X_resampled, y_resampled = sampler.fit_resample(X, y) if oversample else (X, y)
# split train and test (33% test, 67% train)
X_train, X_test, y_train, y_test = train_test_split(X_resampled, y_resampled, stratify=y_resampled, test_size=test_size, random_state=42)
# create classification pipeline
pipeline_elements = []
if scale:
pipeline_elements.append(('scaler', RobustScaler()))
if pca:
pipeline_elements.append(('reduce_dimension', PCA(n_components=3)))
pipeline_elements.append(('classifier', get_model(model_name)))
# make pipeline
clf = Pipeline(pipeline_elements)
print("Pipeline: " + " -> ".join(clf.named_steps.keys()))
# cross validation
print("\n--- Cross Validation ---")
scores = cross_validate(clf, X_resampled, y_resampled, cv=cv)
print(pd.DataFrame.from_dict(scores))
print("Average score: ", np.mean(scores["test_score"]))
# fit the model and evaluate
print("\n--- Train/Test Split ---")
clf.fit(X_train, y_train) # force retrain the model
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
# plot the confusion matrix from train/test split
plot_confusion_matrix(clf, X_test, y_test)
###Output
_____no_output_____
###Markdown
Load the dataset
###Code
# load dataset
df = pd.read_csv('dataset/train_data.csv')
# sample top 5 data
df.head()
# split to features and label
X = df.iloc[:, 1:-1].values
y = df.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Baseline Models
###Code
# K-Nearest Neighbor
run_predictions(df, "knn")
# Logistic Regression
run_predictions(df, "logistic")
# Decision Tree
run_predictions(df, "tree")
# Random Forest
run_predictions(df, "forest")
# Multilayer Perceptron
run_predictions(df, "mlp")
# Support Vector Machine
run_predictions(df, "svm")
###Output
Running preprocessing...
Pipeline: classifier
--- Cross Validation ---
fit_time score_time test_score
0 5.229378 0.626500 0.9430
1 4.720030 0.610470 0.9390
2 4.797000 0.622500 0.9400
3 4.798999 0.617500 0.9390
4 4.651500 0.619500 0.9420
5 4.776001 0.618502 0.9365
6 4.826999 0.643497 0.9420
7 4.962998 0.627501 0.9375
8 4.901000 0.630501 0.9360
9 4.846000 0.623001 0.9405
Average score: 0.93955
--- Train/Test Split ---
precision recall f1-score support
0 0.94 1.00 0.97 6063
1 0.87 0.33 0.47 537
accuracy 0.94 6600
macro avg 0.91 0.66 0.72 6600
weighted avg 0.94 0.94 0.93 6600
###Markdown
Using Oversampling
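The only change versus the baseline runs is oversample=True. A minimal sketch with made-up data shows what the RandomOverSampler used inside run_predictions does to the class counts before training (see the runs below for the real data):
###Code
# Minimal sketch (made-up data) of the oversampling step inside run_predictions.
import numpy as np
from imblearn.over_sampling import RandomOverSampler
X_toy = np.arange(10).reshape(-1, 1)
y_toy = np.array([0] * 8 + [1] * 2)                  # imbalanced: 8 vs 2
X_res, y_res = RandomOverSampler(random_state=42).fit_resample(X_toy, y_toy)
print(np.unique(y_toy, return_counts=True))          # counts before: (8, 2)
print(np.unique(y_res, return_counts=True))          # minority class duplicated up to (8, 8)
###Output
_____no_output_____
###Markdown
The cells below repeat each model with this resampling applied to the real dataset.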
###Code
run_predictions(df, "knn", oversample=True)
run_predictions(df, "logistic", oversample=True)
run_predictions(df, "tree", oversample=True)
run_predictions(df, "forest", oversample=True)
run_predictions(df, "mlp", oversample=True)
run_predictions(df, "svm", oversample=True)
###Output
Running preprocessing...
Pipeline: classifier
--- Cross Validation ---
fit_time score_time test_score
0 31.060997 4.504497 0.847075
1 28.791500 4.470505 0.842177
2 31.232500 4.499495 0.837551
3 31.261002 4.502501 0.848707
4 28.768999 4.527498 0.841361
5 31.424001 4.501998 0.848163
6 28.566500 4.466500 0.840272
7 31.187999 4.506500 0.847619
8 27.560000 4.508500 0.850844
9 28.803504 4.532499 0.848666
Average score: 0.8452435240835584
--- Train/Test Split ---
precision recall f1-score support
0 0.86 0.82 0.84 6064
1 0.83 0.87 0.85 6063
accuracy 0.84 12127
macro avg 0.84 0.84 0.84 12127
weighted avg 0.84 0.84 0.84 12127
###Markdown
Using Feature Scaling
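RobustScaler centres each feature on its median and scales by the interquartile range, which keeps extreme values from dominating the scale. A minimal sketch with made-up values (not taken from the dataset):
###Code
# Minimal sketch (made-up values) of the RobustScaler used in the pipeline.
import numpy as np
from sklearn.preprocessing import RobustScaler
X_toy = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])   # one extreme outlier
print(RobustScaler().fit_transform(X_toy).ravel())
# median = 3 and IQR = 4 - 2 = 2, so values become (x - 3) / 2 and the outlier stays isolated
###Output
_____no_output_____
###Markdown
The cells below repeat each model with this scaling step added to the pipeline.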
###Code
run_predictions(df, "knn", scale=True)
run_predictions(df, "logistic", scale=True)
run_predictions(df, "tree", scale=True)
run_predictions(df, "forest", scale=True)
run_predictions(df, "mlp", scale=True)
run_predictions(df, "svm", scale=True)
###Output
Running preprocessing...
Pipeline: scaler -> classifier
--- Cross Validation ---
fit_time score_time test_score
0 5.425501 0.677999 0.9190
1 5.555000 0.661497 0.9190
2 5.631501 0.665999 0.9190
3 5.927501 0.673500 0.9190
4 5.371001 0.655499 0.9185
5 5.410000 0.649501 0.9185
6 5.584004 0.662497 0.9185
7 5.782000 0.654500 0.9185
8 5.522999 0.645000 0.9185
9 5.258500 0.661500 0.9185
Average score: 0.9187
--- Train/Test Split ---
E:\app-store\bin\miniconda3\envs\ml\lib\site-packages\sklearn\metrics\_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
E:\app-store\bin\miniconda3\envs\ml\lib\site-packages\sklearn\metrics\_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
E:\app-store\bin\miniconda3\envs\ml\lib\site-packages\sklearn\metrics\_classification.py:1245: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
precision recall f1-score support
0 0.92 1.00 0.96 6063
1 0.00 0.00 0.00 537
accuracy 0.92 6600
macro avg 0.46 0.50 0.48 6600
weighted avg 0.84 0.92 0.88 6600
###Markdown
Using Oversampling and Feature Scaling
###Code
run_predictions(df, "knn", scale=True, oversample=True)
run_predictions(df, "logistic", scale=True, oversample=True)
run_predictions(df, "tree", scale=True, oversample=True)
run_predictions(df, "forest", scale=True, oversample=True)
run_predictions(df, "mlp", scale=True, oversample=True)
run_predictions(df, "svm", scale=True, oversample=True)
###Output
Running preprocessing...
Pipeline: scaler -> classifier
--- Cross Validation ---
fit_time score_time test_score
0 31.775999 5.902000 0.750476
1 32.076500 6.057000 0.753469
2 32.257499 6.064000 0.750748
3 32.042501 5.927999 0.752925
4 31.819000 5.881000 0.742857
5 32.462000 5.939500 0.749932
6 32.201501 5.948998 0.736327
7 32.786498 6.064145 0.746939
8 32.484534 6.014003 0.766195
9 33.029840 6.129500 0.754763
Average score: 0.750463155322009
--- Train/Test Split ---
precision recall f1-score support
0 0.77 0.71 0.74 6064
1 0.73 0.79 0.76 6063
accuracy 0.75 12127
macro avg 0.75 0.75 0.75 12127
weighted avg 0.75 0.75 0.75 12127
###Markdown
Use the model to perform predictions
###Code
# --- start of model parameters ---
# dataset used to train the model
df_train = pd.read_csv('dataset/train_data.csv')
# dataset used to test the model
df_test = pd.read_csv('dataset/test_data_unlabeled.csv')
# --- end of model parameters ---
# split features and label, use all data in the training set (not splitting it as we do in above code)
X_train_real = df_train.iloc[:, 1:-1].values
y_train_real = df_train.iloc[:, -1].values
# do oversampling to handle imbalanced class
sampler = RandomOverSampler()
X_resampled, y_resampled = sampler.fit_resample(X_train_real, y_train_real)
#X_resampled, y_resampled = X_train_real, y_train_real
# select the same columns from the test dataset as the train dataset
# (TEST_COLUMNS is assumed here to be the training feature columns, since it is not defined elsewhere in this notebook)
TEST_COLUMNS = df_train.columns[1:-1]
X_new = df_test[TEST_COLUMNS].values
# fit the model
clf = MLPClassifier(random_state=42) #RandomForestClassifier()
clf.fit(X_resampled, y_resampled)
# run predictions, the result will be saved in y_pred as numpy array
y_pred = clf.predict(X_new)
np.unique(y_pred, return_counts=True)
df_test["failed"] = y_pred
df_test[["job_id", "failed"]].to_csv('result.csv', index=None)
df_test["failed"].value_counts() / len(df_test) * 100
###Output
_____no_output_____ |
ipynb-tools/000_svcode.ipynb | ###Markdown
Creating a new notebook 1. Open the command palette with the shortcut: Ctrl/Command + Shift + P 2. Search for the command Create New Blank Jupyter Notebook --- How to get back to the start page 1. Open the command palette with the shortcut: Ctrl/Command + Shift + P 2. Search for the command Python: Open Start Page --- Getting started You are currently viewing what we call our Notebook Editor. It is an interactive document based on Jupyter Notebooks that supports the intermixing of code, outputs and markdown documentation. This cell is a markdown cell. To edit the text in this cell, simply click on the cell to change it into edit mode. The next cell below is a code cell. You can switch a cell between code and markdown by clicking on the code/markdown icons or using the keyboard shortcuts M and Y respectively.
###Code
print('hello world')
###Output
_____no_output_____ |
Models/English Models/English Sub-task A.ipynb | ###Markdown
Dataset Reading
###Code
import pandas as pd
data = pd.read_excel('drive/My Drive/HASOC Competition Data/hasoc_2020_en_train_new.xlsx')
pd.set_option('display.max_colwidth',150)
data.head(10)
data.shape
print(data.dtypes)
###Output
tweet_id int64
text object
task1 object
task2 object
ID object
dtype: object
###Markdown
Making of "label" Variable
###Code
label = data['task1']
label.head()
###Output
_____no_output_____
###Markdown
Checking Dataset Balance
###Code
print(label.value_counts())
import matplotlib.pyplot as plt
label.value_counts().plot(kind='bar', color='blue')
###Output
HOF 1856
NOT 1852
Name: task1, dtype: int64
###Markdown
Converting the label into "0" or "1"
###Code
import numpy as np
classes_list = ["HOF","NOT"]
label_index = data['task1'].apply(classes_list.index)  # "HOF" -> 0, "NOT" -> 1
final_label = np.asarray(label_index)
print(final_label[:10])
from keras.utils.np_utils import to_categorical
label_twoDimension = to_categorical(final_label, num_classes=2)
print(label_twoDimension[:10])
###Output
[[1. 0.]
[1. 0.]
[0. 1.]
[1. 0.]
[0. 1.]
[1. 0.]
[1. 0.]
[1. 0.]
[1. 0.]
[0. 1.]]
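to_categorical one-hot encodes the integer labels, typically so they can feed a two-unit softmax output layer. For reference, a NumPy equivalent of this step (an illustration, not part of the original notebook):
import numpy as np
one_hot = np.eye(2)[final_label]  # same one-hot rows as to_categorical(final_label, num_classes=2)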
###Markdown
Making of "text" Variable
###Code
text = data['text']
text.head(10)
###Output
_____no_output_____
###Markdown
Dataset Pre-processing
###Code
import re
def text_clean(text):
''' Lower-case the text, expand contractions, strip URLs and stray punctuation; returns the cleaned string '''
text=text.lower()
# Clean the text
text = re.sub(r"[^A-Za-z0-9^,!.\/'+-=]", " ", text)
text = re.sub(r"what's", "what is ", text)
text = re.sub(r"I'm", "I am ", text)
text = re.sub(r"\'s", " ", text)
text = re.sub(r"\'ve", " have ", text)
text = re.sub(r"can't", "cannot ", text)
text = re.sub(r"wouldn't", "would not ", text)
text = re.sub(r"shouldn't", "should not ", text)
text = re.sub(r"shouldn", "should not ", text)
text = re.sub(r"didn", "did not ", text)
text = re.sub(r"n't", " not ", text)
text = re.sub(r"i'm", "i am ", text)
text = re.sub(r"\'re", " are ", text)
text = re.sub(r"\'d", " would ", text)
text = re.sub(r"\'ll", " will ", text)
text = re.sub(r'https?://\S+|www\.\S+', "", text)
text = re.sub(r",", " ", text)
text = re.sub(r"\.", " ", text)
text = re.sub(r"!", " ! ", text)
text = re.sub(r"\/", " ", text)
text = re.sub(r"\^", " ^ ", text)
text = re.sub(r"\+", " + ", text)
text = re.sub(r"\-", " - ", text)
text = re.sub(r"\=", " = ", text)
text = re.sub(r"'", " ", text)
text = re.sub(r"(\d+)(k)", r"\g<1>000", text)
text = re.sub(r":", " : ", text)
text = re.sub(r" e g ", " eg ", text)
text = re.sub(r" b g ", " bg ", text)
text = re.sub(r" u s ", " american ", text)
text = re.sub(r"\0s", "0", text)
text = re.sub(r" 9 11 ", "911", text)
text = re.sub(r"e - mail", "email", text)
text = re.sub(r"j k", "jk", text)
text = re.sub(r"\s{2,}", " ", text)
text = re.sub(r"rt", " ", text)
return text
clean_text = text.apply(lambda x:text_clean(x))
clean_text.head(10)
###Output
_____no_output_____
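As a quick sanity check of text_clean, here is a made-up tweet run through it (a hypothetical example, not a row from the dataset):
sample = "RT @user: I'm sure he can't win!! https://t.co/xyz"
print(text_clean(sample))
# lower-cased, "i'm" -> "i am", "can't" -> "cannot", the URL and the "@" symbol are stripped, and "!" is padded with spaces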
###Markdown
Removing stopwords
###Code
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
def stop_words_removal(text1):
text1=[w for w in text1.split(" ") if w not in stopwords.words('english')]
return " ".join(text1)
clean_text_ns=clean_text.apply(lambda x: stop_words_removal(x))
print(clean_text_ns.head(10))
###Output
0 hate wen females hit ah nigga tht bro tryna make u la sweety fuck ah bro
1 airjunebug : bay really ny nigga hea w suppo caleon
2 donaldjtrumpjr : dear democrats : american people stupid know spying amount gaslighting change th
3 shelovetimothy : drugs bored shit bored
4 tavianjordan : summer 19 coming ! boring shit ! beach days road trips kickbacks hot days ! ready ready
5 hermescxbin turn shit
6 spaceboykenny : know fuck bout cel shading horny instead
7 polo ts ones feeeling fly fly like bitch touch
8 fucking love life ! ! !
9 nig bmt newspaper weak bro ending pissed
Name: text, dtype: object
###Markdown
Stemming
###Code
# Stemming
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
def word_stemmer(text):
stem_text = "".join([stemmer.stem(i) for i in text])
return stem_text
clean_text_stem = clean_text_ns.apply(lambda x : word_stemmer(x))
print(clean_text_stem.head(10))
###Output
0 hate wen females hit ah nigga tht bro tryna make u la sweety fuck ah bro
1 airjunebug : bay really ny nigga hea w suppo caleon
2 donaldjtrumpjr : dear democrats : american people stupid know spying amount gaslighting change th
3 shelovetimothy : drugs bored shit bored
4 tavianjordan : summer 19 coming ! boring shit ! beach days road trips kickbacks hot days ! ready ready
5 hermescxbin turn shit
6 spaceboykenny : know fuck bout cel shading horny instead
7 polo ts ones feeeling fly fly like bitch touch
8 fucking love life ! ! !
9 nig bmt newspaper weak bro ending pissed
Name: text, dtype: object
###Markdown
Tokenization using "keras"
###Code
import keras
import tensorflow
from keras.preprocessing.text import Tokenizer
tok_all = Tokenizer(filters='!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~', lower=True, char_level = False)
tok_all.fit_on_texts(clean_text_stem)
###Output
_____no_output_____
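The fitted tokenizer is normally used next to turn each cleaned tweet into a padded sequence of word indices for an embedding layer. A minimal sketch of that step follows; the notebook's own cell for this comes later and may differ, and maxlen=50 is an arbitrary value chosen only for illustration:
from keras.preprocessing.sequence import pad_sequences

sequences = tok_all.texts_to_sequences(clean_text_stem)       # words -> integer indices
padded = pad_sequences(sequences, maxlen=50, padding='post')  # pad/truncate to a fixed length
print(padded.shape)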
###Markdown
Building the word vocabulary
###Code
vocabulary_all = len(tok_all.word_counts)
print(vocabulary_all)
l = tok_all.word_index
print(l)
###Output
{'fuck': 1, 'shit': 2, 'fucking': 3, 'like': 4, 'get': 5, 'ass': 6, 'go': 7, 'people': 8, 'need': 9, 'know': 10, 'think': 11, 'want': 12, 'one': 13, 'bitch': 14, 'never': 15, 'ever': 16, 'going': 17, 'damn': 18, 'would': 19, 'got': 20, 'tell': 21, 'u': 22, 'realdonaldtrump': 23, 'president': 24, 'trump': 25, 'stupid': 26, 'said': 27, 'look': 28, 'see': 29, 'really': 30, 'bts': 31, 'amp': 32, 'say': 33, 'getting': 34, 'good': 35, 'sick': 36, 'even': 37, 'away': 38, 'still': 39, 'come': 40, 'stop': 41, 'two': 42, 'work': 43, 'today': 44, 'big': 45, 'better': 46, '2': 47, 'little': 48, 'sta': 49, 'gonna': 50, '2019': 51, 'love': 52, 'help': 53, 'well': 54, 'everything': 55, 'years': 56, 'man': 57, 'lol': 58, 'way': 59, 'make': 60, 'thought': 61, 'time': 62, 'give': 63, '3': 64, 'could': 65, 'keep': 66, 'america': 67, 'right': 68, 'h': 69, '1': 70, 'hate': 71, 'found': 72, 'probably': 73, 'white': 74, 'w': 75, 'life': 76, 'oh': 77, 'always': 78, 'day': 79, 'someone': 80, 'show': 81, 'im': 82, 'suck': 83, 'nobody': 84, 'dumb': 85, 'left': 86, 'fine': 87, 'cannot': 88, 'father': 89, 'bad': 90, 'much': 91, 'rest': 92, 'guy': 93, 'please': 94, 'yo': 95, 'back': 96, 'coming': 97, 'video': 98, 'world': 99, 'us': 100, 'wanna': 101, 'god': 102, 'tonight': 103, '5': 104, 'live': 105, 'put': 106, 'face': 107, 'everyone': 108, 'twt': 109, 'old': 110, 'house': 111, 'trash': 112, 'die': 113, 'nigga': 114, 'bi': 115, 'sorry': 116, 'tired': 117, 'bit': 118, 'okay': 119, 'mother': 120, 'idiot': 121, 'country': 122, 'saying': 123, '4': 124, 'lt': 125, '19': 126, 'days': 127, 'may': 128, 'ok': 129, 'gt': 130, 'might': 131, 'around': 132, 'ya': 133, 'school': 134, 'vote': 135, 'b': 136, 'another': 137, 'pa': 138, 'something': 139, 'hand': 140, 'racist': 141, 'literally': 142, 'hard': 143, 'sex': 144, 'game': 145, 'hell': 146, 'liar': 147, 'hope': 148, 'actually': 149, 'full': 150, 'th': 151, 'dont': 152, 'dad': 153, 'n': 154, 'gone': 155, 'cause': 156, 'gotta': 157, 'making': 158, 'let': 159, '20': 160, 'ah': 161, 'whole': 162, 'morning': 163, 'talking': 164, 'james': 165, 'long': 166, 'mom': 167, 'done': 168, 'feel': 169, 'women': 170, 'instead': 171, 'holy': 172, 'https': 173, 'wow': 174, '10': 175, 'great': 176, 'checked': 177, 'lying': 178, 'mouth': 179, 'person': 180, 'barr': 181, 'mean': 182, 'looking': 183, 'followed': 184, 'abo': 185, 'thank': 186, 'words': 187, 'mr': 188, 'money': 189, 'anyone': 190, 'office': 191, 'hea': 192, 'ur': 193, 'post': 194, 'close': 195, 'cut': 196, 'next': 197, 'friend': 198, 'wo': 199, 'kids': 200, 'ed': 201, 'suppo': 202, 'ready': 203, 'dick': 204, 'yet': 205, 'told': 206, 'things': 207, 'question': 208, 'date': 209, 'ago': 210, 'automatically': 211, 'r': 212, 'girls': 213, 'take': 214, 'line': 215, 'nothing': 216, 'call': 217, 'also': 218, 'lot': 219, 'son': 220, 'hday': 221, 'ion': 222, 'able': 223, 'glad': 224, 'spend': 225, 'job': 226, 'thing': 227, 'problem': 228, 'games': 229, 'made': 230, 'niggas': 231, 'trying': 232, 'f': 233, 'bro': 234, 'best': 235, 'watch': 236, 'sleep': 237, 'place': 238, 'worst': 239, 'use': 240, 'says': 241, 'talk': 242, 'needs': 243, 'charles': 244, 'many': 245, 'every': 246, 'year': 247, 'free': 248, 'music': 249, 'cute': 250, 'calling': 251, 'means': 252, 'find': 253, 'head': 254, 'baby': 255, 'clear': 256, 'hot': 257, 'men': 258, 'lmao': 259, 'rather': 260, 'business': 261, 'jobs': 262, 'since': 263, 'mental': 264, 'pretty': 265, 'health': 266, '6': 267, 'calls': 268, 'went': 269, 'took': 270, 'name': 271, 'bbmastopsocial': 272, 'young': 
273, 'piece': 274, 'scroll': 275, 'kill': 276, 'card': 277, 'hu': 278, 'tryna': 279, 'real': 280, 'fat': 281, 'total': 282, 'control': 283, 'shut': 284, 'fact': 285, 'hi': 286, 'yall': 287, 'wtf': 288, 'scrolling': 289, 'yes': 290, 'gets': 291, 'conce': 292, 'c': 293, 'p': 294, 'future': 295, 'l': 296, 'finally': 297, 'theyre': 298, 'less': 299, 'dead': 300, 'agree': 301, 'yeah': 302, 'amazing': 303, 'x': 304, '11': 305, 'mad': 306, 'girl': 307, 'tweet': 308, 'friends': 309, 'late': 310, 'play': 311, 'orange': 312, 'public': 313, 'omg': 314, 'e': 315, 'nice': 316, 'crying': 317, 'fans': 318, 'hear': 319, 'tomorrow': 320, 'thinking': 321, 'ppl': 322, 'obligated': 323, 'single': 324, 'bruh': 325, 'called': 326, 'states': 327, 'boy': 328, 'traitor': 329, 'fucked': 330, 'miss': 331, 'hey': 332, 'american': 333, 'turn': 334, 'government': 335, 'pm': 336, 'seen': 337, 'dog': 338, 'eat': 339, 'jon': 340, 'k': 341, 'news': 342, 'modi': 343, 'jesus': 344, 'case': 345, 'water': 346, 'remember': 347, 'welcome': 348, 'af': 349, 'beat': 350, 'bye': 351, 'drop': 352, 'almost': 353, 'past': 354, 'crazy': 355, 'bullshit': 356, 'else': 357, 'gay': 358, 'sho': 359, 'g': 360, 'lost': 361, 'saw': 362, 'first': 363, 'weak': 364, 'looks': 365, 'bitches': 366, 'months': 367, 'congress': 368, 'enough': 369, 'tickets': 370, 'proud': 371, 'exactly': 372, 'guys': 373, 'idk': 374, 'half': 375, 'state': 376, 'fo': 377, 'class': 378, 'graham': 379, 'home': 380, 'sold': 381, 'asked': 382, 'anything': 383, 'wants': 384, 'ko': 385, 'facts': 386, 'tried': 387, 'ones': 388, 'funny': 389, 'youre': 390, 'truth': 391, 'care': 392, 'room': 393, 'telling': 394, 'bet': 395, 'de': 396, 'update': 397, 'minute': 398, 'followers': 399, 'understand': 400, '8': 401, 'names': 402, 'used': 403, 'super': 404, 'happy': 405, 'without': 406, 'forget': 407, 'chicken': 408, 'food': 409, 'least': 410, '12': 411, 'taken': 412, 'running': 413, 'messi': 414, 'open': 415, 'dems': 416, 'weekend': 417, 'honestly': 418, 'fake': 419, 'point': 420, 'children': 421, 'shame': 422, 'wit': 423, 'hour': 424, 'minutes': 425, 'number': 426, 'together': 427, 'feeling': 428, 'happen': 429, 'ce': 430, 'dance': 431, 'info': 432, 'jail': 433, 'believe': 434, 'waiting': 435, 'sister': 436, 'ugly': 437, 'lindsey': 438, 'terrorist': 439, 'kiss': 440, 'cuz': 441, 'wife': 442, 'apparently': 443, 'lock': 444, 'wrote': 445, 'fuckin': 446, 'fool': 447, 'ist': 448, 'reading': 449, 'simple': 450, 'jimin': 451, 'high': 452, 'cou': 453, 'run': 454, 'behind': 455, 'jungkook': 456, 'soon': 457, 'ban': 458, 'joke': 459, 'united': 460, 'sexy': 461, 'mentally': 462, 'change': 463, '25': 464, 'movie': 465, 'king': 466, 'thats': 467, 'small': 468, 'paul': 469, 'buy': 470, 'moron': 471, 'eyes': 472, 'night': 473, 'sure': 474, 'using': 475, 'tbh': 476, 'absolutely': 477, 'dude': 478, 'came': 479, 'top': 480, 'seems': 481, 'awesome': 482, 'arya': 483, 'black': 484, 'rep': 485, 'either': 486, 'bbmas': 487, 'working': 488, 'watching': 489, 'age': 490, 'tour': 491, 'bc': 492, 'rape': 493, 'nah': 494, 'second': 495, 'acting': 496, 'club': 497, 'po': 498, '15': 499, 'reply': 500, 'trust': 501, 'retweet': 502, 'lose': 503, 'respect': 504, 'course': 505, 'skin': 506, 'bed': 507, 'la': 508, 'democrats': 509, 'bout': 510, 'em': 511, 'record': 512, 'breaking': 513, 'end': 514, 'new': 515, 'pussy': 516, 'forgot': 517, 'totally': 518, 'chris': 519, 'brain': 520, 'move': 521, 'dreams': 522, 'truly': 523, 'illegal': 524, 'simply': 525, 'works': 526, 'blood': 527, 'putting': 528, 'sucks': 529, 
'sexual': 530, 'ive': 531, 'test': 532, 'wanted': 533, 'pregnant': 534, 'word': 535, 'kicked': 536, 'yoongi': 537, 'rich': 538, 'criminal': 539, 'basically': 540, 'mood': 541, 'americans': 542, 'feels': 543, 'sometimes': 544, 'along': 545, 'biggest': 546, 'cool': 547, 'bring': 548, 'already': 549, 'knew': 550, 'steve': 551, 'unless': 552, 'sit': 553, 'met': 554, 'oppo': 555, 'goal': 556, 'english': 557, 'pic': 558, 'muslim': 559, 'ignorant': 560, 'allowed': 561, 'power': 562, 'rights': 563, 'pay': 564, 'entire': 565, 'felt': 566, 'text': 567, '000': 568, 'worse': 569, 'catch': 570, 'giving': 571, '7': 572, 'boyfriend': 573, 'hole': 574, 'last': 575, 'husband': 576, 'swear': 577, 'finna': 578, 'bag': 579, 'attorney': 580, 'sma': 581, 'learn': 582, 'tati': 583, 'hit': 584, 'road': 585, 'touch': 586, 'letter': 587, 'twitter': 588, 'reminder': 589, 'mine': 590, 'boys': 591, 'ask': 592, 'thinks': 593, 'chance': 594, 'listen': 595, 'impo': 596, 'er': 597, 'billion': 598, 'muslims': 599, 'complete': 600, 'republicans': 601, 'media': 602, 'pray': 603, '14': 604, 'former': 605, 'gameofthrones': 606, 'wait': 607, 'wonder': 608, 'far': 609, 'caught': 610, 'ea': 611, 'whatever': 612, 'wish': 613, 'upset': 614, 'di': 615, 'thanks': 616, 'dany': 617, 'john': 618, 'gop': 619, 'wrong': 620, 'brother': 621, 'excuse': 622, 'song': 623, 'group': 624, 'mf': 625, 'double': 626, 'boss': 627, 'fight': 628, 'disgusting': 629, 'planning': 630, 'act': 631, 'drink': 632, 'horrible': 633, 'planet': 634, 'anymore': 635, 'thehill': 636, 'cock': 637, 'golden': 638, 'tho': 639, 'ring': 640, 'gave': 641, 'cant': 642, 'walk': 643, 'eating': 644, 'series': 645, 'try': 646, 'peace': 647, 'reason': 648, 'fun': 649, 'bottom': 650, 'actual': 651, 'month': 652, 'sir': 653, 'definitely': 654, 'saved': 655, 'hair': 656, 'leave': 657, 'dear': 658, 'boring': 659, 'ending': 660, 'sending': 661, 'smoke': 662, 'election': 663, 'child': 664, 'bran': 665, 'light': 666, 'dream': 667, 'deal': 668, 'history': 669, 'outside': 670, 'utter': 671, 'treasures': 672, 'inner': 673, 'virgo': 674, 'space': 675, 'knowing': 676, 'ers': 677, 'gameofthronesfinale': 678, 'private': 679, 'pe': 680, 'cheated': 681, 'sweet': 682, 'throwing': 683, 'j': 684, 'choose': 685, 'kid': 686, 'cat': 687, 'died': 688, 'playing': 689, 'loser': 690, 'beauty': 691, 'videos': 692, 'stand': 693, 'wall': 694, 'bread': 695, 'pro': 696, 'international': 697, 'queen': 698, 'level': 699, 'personal': 700, 'funder': 701, 'annoying': 702, 'cersei': 703, 'duck': 704, 'unity': 705, 'heard': 706, 'defend': 707, 'keeps': 708, 'middle': 709, 'forever': 710, 'picture': 711, 'lame': 712, 'happens': 713, 'idiots': 714, 'immediately': 715, 'position': 716, 'british': 717, 'asf': 718, 'isnt': 719, 'round': 720, '0': 721, 'goes': 722, 'showing': 723, 'twerking': 724, 'straight': 725, 'finale': 726, 'wear': 727, 'dangerous': 728, 'sign': 729, 'hoe': 730, '30': 731, 'cody': 732, 'matter': 733, 'shoot': 734, 'fr': 735, 'side': 736, 'kinda': 737, '9': 738, 'armys': 739, 'co': 740, 'period': 741, 'corrupt': 742, 'following': 743, 'tear': 744, 'general': 745, 'jack': 746, 'beautiful': 747, 'mind': 748, 'gorgeous': 749, 'sad': 750, 'lies': 751, 'dr': 752, 'amount': 753, 'bigger': 754, 'lie': 755, 'story': 756, 'decided': 757, 'daughter': 758, 'ente': 759, 'ima': 760, 'law': 761, 'sale': 762, 'comey': 763, 'laws': 764, 'extra': 765, 'donald': 766, 'speech': 767, 'plan': 768, 'sifting': 769, 'shooter': 770, 'final': 771, 'officially': 772, 'guilty': 773, 'random': 774, 'repo': 775, 'instagram': 776, 
'week': 777, 'beyond': 778, 'dumbass': 779, 'enjoy': 780, 'cutest': 781, 'strong': 782, 'seeing': 783, 'stress': 784, 'energy': 785, 'jin': 786, 'hoseok': 787, 'likes': 788, 'happened': 789, 'wild': 790, 'yup': 791, 'blind': 792, 'yesterday': 793, 'hold': 794, 'forgive': 795, 'everybody': 796, 'favorite': 797, 'mins': 798, 'realize': 799, 'season': 800, 'pathetic': 801, 'become': 802, 'local': 803, 'list': 804, 'conservatives': 805, 'though': 806, 'loki': 807, 'true': 808, 'con': 809, 'bull': 810, 'lit': 811, 'check': 812, '17': 813, 'remain': 814, 'born': 815, 'z': 816, 'supposed': 817, 'insane': 818, 'woman': 819, 'kidding': 820, 'sum': 821, 'stark': 822, 'asshole': 823, 'weird': 824, 'knows': 825, 'idc': 826, 'kind': 827, 'taylor': 828, 'beyonc': 829, 'economy': 830, 'ions': 831, 'car': 832, 'drive': 833, 'obama': 834, 'da': 835, 'store': 836, 'http': 837, 'due': 838, 'poor': 839, 'lovely': 840, 'kick': 841, 'jake': 842, 'turned': 843, 'selling': 844, 'stuff': 845, 'win': 846, 'healthy': 847, 'industry': 848, 'halsey': 849, '21': 850, 'manager': 851, 'lindseygrahamsc': 852, 'rawstory': 853, 'tony': 854, 'smh': 855, 'gas': 856, 'ill': 857, 'till': 858, 'ig': 859, 'mostly': 860, 'haha': 861, 'three': 862, 'ing': 863, 'harris': 864, 'follow': 865, 'shot': 866, 'huh': 867, 'player': 868, 'taking': 869, 'slut': 870, 'iam': 871, 'quit': 872, 'listening': 873, 'thread': 874, 'south': 875, 'understood': 876, 'holiday': 877, 'dbongino': 878, 'asking': 879, 'drugs': 880, 'horny': 881, 'pissed': 882, 'vile': 883, 'missing': 884, 'congrats': 885, 'daddy': 886, 'different': 887, 'self': 888, 'family': 889, 'idea': 890, 'hands': 891, 'buying': 892, 'played': 893, 'must': 894, 'tax': 895, '2014': 896, 'predator': 897, 'mark': 898, 'spending': 899, 'regardless': 900, 'charliekirk11': 901, 'legs': 902, 'blue': 903, 'empty': 904, 'six': 905, '45': 906, 'dying': 907, 'concerned': 908, 'guess': 909, 'tweets': 910, 'save': 911, 'nite': 912, 'ugh': 913, 'anyway': 914, 'alone': 915, 'tf': 916, 'evil': 917, 'ground': 918, 'ht': 919, 'namjoon': 920, 'ashamed': 921, 'tv': 922, 'global': 923, 'cry': 924, 'lady': 925, 'ridiculous': 926, 'weed': 927, 'daily': 928, 'stan': 929, 'wayv': 930, 'angry': 931, 'hearing': 932, 'mama': 933, 'early': 934, 'democratic': 935, 'beef': 936, '00': 937, 'senkamalaharris': 938, 'clown': 939, 'makeup': 940, 'handle': 941, 'blow': 942, 'leaving': 943, 'perfectly': 944, 'friday': 945, 'honor': 946, 'non': 947, 'bangtan': 948, 'cunt': 949, 'scared': 950, 'shitty': 951, 'ex': 952, 'islam': 953, 'fu': 954, 'clearly': 955, 'remove': 956, 'curious': 957, 'read': 958, 'consent': 959, 'eye': 960, 'towards': 961, 'steph': 962, 'west': 963, 'attention': 964, 'whoever': 965, 'train': 966, 'stuck': 967, 'nap': 968, 'animals': 969, 'liverpool': 970, 'tr': 971, 'cancel': 972, '40': 973, 'edition': 974, 'sansa': 975, 'senate': 976, 'workers': 977, 'maybe': 978, 'dem': 979, 'skills': 980, 'st': 981, 'swift': 982, 'nudes': 983, 'send': 984, 'broke': 985, 'labour': 986, 'online': 987, 'mo': 988, 'blah': 989, 'bill': 990, 'gives': 991, 'inside': 992, 'research': 993, 'paper': 994, 'fall': 995, 'pr': 996, 'worry': 997, 'sis': 998, 'fox': 999, 'mtv': 1000, 'couple': 1001, 'shes': 1002, '13': 1003, 'pizza': 1004, 'slow': 1005, 'confused': 1006, 'staying': 1007, 'nasty': 1008, 'grown': 1009, 'join': 1010, 'ca': 1011, 'figure': 1012, 'interested': 1013, 'page': 1014, 'star': 1015, 'fit': 1016, 'asap': 1017, 'deserve': 1018, 'characters': 1019, 'angel': 1020, 'meanwhile': 1021, 'smile': 1022, 'changed': 
1023, '2016': 1024, 'clean': 1025, 'singing': 1026, 'milk': 1027, 'career': 1028, 'religion': 1029, 'bunch': 1030, 'knock': 1031, 'june': 1032, 'hours': 1033, 'mate': 1034, 'snapchat': 1035, 'red': 1036, 'exo': 1037, 'type': 1038, 'usa': 1039, 'answer': 1040, 'ant': 1041, 'dawg': 1042, 'anti': 1043, 'gon': 1044, 'reuters': 1045, 'bought': 1046, 'worked': 1047, 'imagine': 1048, 'cheat': 1049, 'low': 1050, 'christ': 1051, 'makes': 1052, 'neck': 1053, 'force': 1054, 'legit': 1055, 'base': 1056, 'present': 1057, 'throat': 1058, 'cnn': 1059, 'xxx': 1060, 'puts': 1061, 'passed': 1062, 'thrown': 1063, 'term': 1064, 'weeks': 1065, 'size': 1066, 'paid': 1067, 'turning': 1068, 'meet': 1069, 'pimple': 1070, 'females': 1071, 'fly': 1072, 'cus': 1073, 'giuliani': 1074, 'foreign': 1075, 'bird': 1076, 'obvious': 1077, 'dumbest': 1078, 'somebody': 1079, 'vs': 1080, 'meeting': 1081, 'india': 1082, 'parents': 1083, 'mum': 1084, 'relationship': 1085, 'comes': 1086, 'filter': 1087, 'pick': 1088, 'lets': 1089, 'posted': 1090, 'usually': 1091, 'nazis': 1092, 'eu': 1093, 'hang': 1094, 'disgrace': 1095, 'fbi': 1096, 'director': 1097, 'rockets': 1098, 'album': 1099, 'perfect': 1100, '50': 1101, 'eventually': 1102, 'colorado': 1103, 'dress': 1104, 'board': 1105, 'closed': 1106, 'shite': 1107, 'ai': 1108, 'legendary': 1109, '00pm': 1110, 'issue': 1111, 'breath': 1112, 'field': 1113, 'internet': 1114, 'terrible': 1115, 'third': 1116, 'benshapiro': 1117, 'trouble': 1118, 'fighting': 1119, 'siblings': 1120, 'living': 1121, 'problematic': 1122, 'ariana': 1123, 'kamalaharris': 1124, 'college': 1125, 'graduate': 1126, 'step': 1127, 'stick': 1128, 'lord': 1129, 'deserves': 1130, 'drake': 1131, 'kills': 1132, 'phone': 1133, 'whore': 1134, 'ticket': 1135, 'taehyung': 1136, 'gays': 1137, 'drunk': 1138, 'hy': 1139, 'wi': 1140, 'curry': 1141, 'cheating': 1142, 'solid': 1143, 'pretending': 1144, 'massive': 1145, 'sucked': 1146, 'farage': 1147, 'expose': 1148, 'fix': 1149, 'plus': 1150, 'tory': 1151, 'lunch': 1152, 'btsone': 1153, 'katsuki': 1154, 'jeffree': 1155, 'posting': 1156, 'lil': 1157, 'waste': 1158, 'booty': 1159, 'times': 1160, 'robe': 1161, 'female': 1162, 'bar': 1163, 'msnbc': 1164, 'su': 1165, 'sun': 1166, 'sing': 1167, 'ghetto': 1168, 'carpet': 1169, '2017': 1170, 'stay': 1171, 'yea': 1172, 'warriors': 1173, 'le': 1174, 'elections': 1175, 'gopchairwoman': 1176, 'nights': 1177, 'awards': 1178, 'content': 1179, 'master': 1180, 'htt': 1181, 'write': 1182, 'raise': 1183, 'rid': 1184, 'hitting': 1185, 'perform': 1186, 'igbo': 1187, 'william': 1188, 'message': 1189, 'council': 1190, 'possible': 1191, 'asian': 1192, 'luck': 1193, 'town': 1194, 'ag': 1195, 'pain': 1196, 'treasonous': 1197, 'rate': 1198, 'carry': 1199, 'fan': 1200, 'dragon': 1201, 'track': 1202, 'crimes': 1203, 'congratulations': 1204, 'plot': 1205, 'across': 1206, 'saving': 1207, 'caring': 1208, 'whether': 1209, 'ists': 1210, 'proven': 1211, 'signed': 1212, 'collab': 1213, 'ay': 1214, 'performance': 1215, 'wa': 1216, 'thug': 1217, 'chest': 1218, 'alarm': 1219, 'goals': 1220, 'songs': 1221, 'momma': 1222, 'snow': 1223, 'goin': 1224, 'laughing': 1225, 'human': 1226, 'however': 1227, 'daenerys': 1228, 'hbo': 1229, 'drinking': 1230, 'ove': 1231, 'looked': 1232, 'happening': 1233, 'lmfao': 1234, 'quote': 1235, 'shopping': 1236, 'filming': 1237, 'quite': 1238, 'talented': 1239, 'brotherhood': 1240, 'ilhan': 1241, 'omar': 1242, 'apa': 1243, 'switch': 1244, 'political': 1245, 'social': 1246, 'fraud': 1247, 'center': 1248, 'tells': 1249, 'btspublicity': 1250, 
'metrics': 1251, 'estimated': 1252, 'babies': 1253, 'aw': 1254, 'expect': 1255, 'survive': 1256, '22': 1257, 'continue': 1258, 'effect': 1259, 'body': 1260, 'emotional': 1261, 'green': 1262, 'nba': 1263, 'talkin': 1264, 'fed': 1265, 'hella': 1266, 'break': 1267, 'exam': 1268, 'married': 1269, 'throw': 1270, 'greatest': 1271, 'character': 1272, 'joe': 1273, 'morons': 1274, 'throne': 1275, 'becomes': 1276, 'shout': 1277, 'banana': 1278, 'sugar': 1279, 'block': 1280, 'dry': 1281, 'v': 1282, 'korea': 1283, 'doubt': 1284, 'jewish': 1285, 'asses': 1286, 'est': 1287, 'karma': 1288, 'officials': 1289, 'data': 1290, 'ride': 1291, 'prove': 1292, 'quotes': 1293, 'tune': 1294, 'university': 1295, 'repadamschiff': 1296, 'spo': 1297, 'marvel': 1298, 'tall': 1299, 'garbage': 1300, 'sake': 1301, 'thrones': 1302, 'hating': 1303, 'explain': 1304, 'vice': 1305, 'pete': 1306, 'pl': 1307, 'delusional': 1308, 'balls': 1309, 'orgyfucker': 1310, 'en': 1311, 'sue': 1312, 'locked': 1313, 'paying': 1314, 'turns': 1315, 'prison': 1316, 'basic': 1317, 'westbrook': 1318, 'bee': 1319, 'jhope': 1320, 'manage': 1321, 'mi': 1322, 'mans': 1323, 'opinion': 1324, 'chan': 1325, 'vegas': 1326, 'ajax': 1327, 'wage': 1328, 'leadership': 1329, 'dating': 1330, 'wins': 1331, 'toilet': 1332, 'shop': 1333, 'offer': 1334, 'secretary': 1335, 'ramadan': 1336, '18': 1337, 'btsatmetlife': 1338, 'br': 1339, 'bottle': 1340, 'korean': 1341, 'harder': 1342, 'piss': 1343, 'ben': 1344, 'streaming': 1345, 'natural': 1346, 'australia': 1347, 'nazi': 1348, 'victim': 1349, 'places': 1350, 'chase': 1351, 'dinner': 1352, 'situation': 1353, 'ad': 1354, 'ty': 1355, 'killed': 1356, 'copies': 1357, 'stomach': 1358, 'profile': 1359, 'innocent': 1360, 'bjp': 1361, 'inc': 1362, 'dc': 1363, 'handsome': 1364, 'valverde': 1365, 'wen': 1366, 'ny': 1367, 'donaldjtrumpjr': 1368, 'rudy': 1369, 'booked': 1370, 'drug': 1371, 'losing': 1372, 'serious': 1373, 'longer': 1374, 'messenger': 1375, 'matters': 1376, 'georgia': 1377, 'reasons': 1378, 'cos': 1379, 'charge': 1380, 'code': 1381, 'tariffs': 1382, 'funding': 1383, 'bo': 1384, 'banned': 1385, 'account': 1386, 'represent': 1387, 'dogs': 1388, 'beer': 1389, 'society': 1390, 'ignore': 1391, 'immigrant': 1392, '09': 1393, 'staring': 1394, 'compliment': 1395, 'highest': 1396, 'fish': 1397, 'embarrassing': 1398, 'excuses': 1399, 'smell': 1400, 'peedekaf': 1401, 'somewhere': 1402, 'intelligence': 1403, 'sisters': 1404, 'acc': 1405, 'vid': 1406, 'ned': 1407, 'absolute': 1408, 'ps': 1409, 'math': 1410, 'bio': 1411, 'milf': 1412, 'choice': 1413, 'blackpink': 1414, 'chill': 1415, 'becoming': 1416, 'fire': 1417, 'treating': 1418, 'exposed': 1419, 'seem': 1420, 'destroying': 1421, 'jim': 1422, 'tears': 1423, 'band': 1424, 'grow': 1425, 'seokjin': 1426, 'rm': 1427, 'teacher': 1428, 'reveals': 1429, 'trial': 1430, 'lead': 1431, 'gtconway3d': 1432, 'picked': 1433, 'everytime': 1434, 'chief': 1435, 'negative': 1436, 'boris': 1437, 'johnson': 1438, 'ultra': 1439, 'cunts': 1440, 'thick': 1441, 'lakers': 1442, 'protest': 1443, 'wing': 1444, 'doesnt': 1445, 'fck': 1446, 'loyal': 1447, 'forgetting': 1448, 'transphobic': 1449, 'purple': 1450, 'hello': 1451, 'tries': 1452, 'tough': 1453, 'bitter': 1454, 'surprise': 1455, 'gift': 1456, 'fucker': 1457, 'bucky': 1458, 'based': 1459, 'texting': 1460, 'reneecarrollaz': 1461, 'billy': 1462, 'ruin': 1463, 'promised': 1464, 'except': 1465, 'loving': 1466, 'china': 1467, 'spineless': 1468, 'sent': 1469, '23': 1470, 'suggested': 1471, 'kissing': 1472, 'thi': 1473, 'fucks': 1474, 'released': 1475, 
'yr': 1476, 'react': 1477, 'language': 1478, 'emo': 1479, 'tag': 1480, 'zero': 1481, 'older': 1482, 'tu': 1483, 'dan': 1484, 'million': 1485, 'wages': 1486, 'rise': 1487, 'everywhere': 1488, 'thor': 1489, 'walked': 1490, 'facebook': 1491, 'friendly': 1492, 'anxiety': 1493, 'nearly': 1494, 'investigation': 1495, 'treat': 1496, 'tight': 1497, 'rough': 1498, 'doin': 1499, 'rolls': 1500, 'wont': 1501, 'shawn': 1502, 'moved': 1503, 'bent': 1504, 'putin': 1505, 'killing': 1506, 'draw': 1507, 'claim': 1508, 'door': 1509, 'russos': 1510, 'lookin': 1511, 'depressed': 1512, 'notice': 1513, 'era': 1514, 'crowd': 1515, 'plain': 1516, 'cloth': 1517, 'museum': 1518, 'conclusion': 1519, 'consensual': 1520, 'site': 1521, 'replaced': 1522, 'cheering': 1523, 'personally': 1524, 'especially': 1525, 'easy': 1526, 'sh': 1527, 'behaviour': 1528, 'bias': 1529, 'pressed': 1530, 'fetus': 1531, 'contact': 1532, 'theory': 1533, 'collaborate': 1534, 'lives': 1535, 'replied': 1536, 'judging': 1537, 'outfits': 1538, 'festival': 1539, 'cold': 1540, 'leaves': 1541, 'van': 1542, 'thicc': 1543, 'scene': 1544, 'satisfying': 1545, 'ronaldo': 1546, 'rock': 1547, 'damage': 1548, 'flowers': 1549, 'anybody': 1550, 'unprotected': 1551, 'nut': 1552, 'blame': 1553, '100': 1554, 'thru': 1555, 'foxnews': 1556, 'blocked': 1557, 'seated': 1558, 'standing': 1559, 'company': 1560, 'polling': 1561, 'travel': 1562, 'flow': 1563, 'q': 1564, 'member': 1565, 'hero': 1566, 'iud': 1567, 'pakistani': 1568, 'teaching': 1569, 'atrupar': 1570, 'leg': 1571, 'apologize': 1572, 'breakdown': 1573, 'billionaires': 1574, 'appropriate': 1575, 'expensive': 1576, 'complex': 1577, 'kno': 1578, 'anxious': 1579, 'useless': 1580, 'restaurant': 1581, 'study': 1582, 'foxandfriends': 1583, 'uses': 1584, 'quickly': 1585, 'flight': 1586, 'gallagher': 1587, 'kings': 1588, 'erotik': 1589, 'bra': 1590, '1m': 1591, 'bomb': 1592, 'royal': 1593, 'vibes': 1594, 'wealth': 1595, 'silent': 1596, 'demand': 1597, 'dollars': 1598, 'chocolate': 1599, 'contempt': 1600, 'jared': 1601, 'inability': 1602, 'moving': 1603, 'jennifer': 1604, 'canada': 1605, 'leader': 1606, 'gun': 1607, 'senior': 1608, 'main': 1609, 'chosen': 1610, 'hillary': 1611, 'collusion': 1612, 'joebiden': 1613, 'kept': 1614, 'asse': 1615, 'reality': 1616, 'comment': 1617, 'visit': 1618, 'team': 1619, 'helping': 1620, 'homophobic': 1621, 'performing': 1622, 'marry': 1623, 'ev': 1624, 'tits': 1625, 'front': 1626, 'image': 1627, 'wanting': 1628, 'ultimate': 1629, 'alyssa': 1630, 'milano': 1631, 'cum': 1632, 'hateful': 1633, 'cam': 1634, 'aining': 1635, 'nowthisnews': 1636, 'involved': 1637, 'arrested': 1638, 'product': 1639, '1000': 1640, 'kicking': 1641, 'winning': 1642, 'worried': 1643, 'crap': 1644, 'pink': 1645, 'jumped': 1646, 'imsadloi': 1647, 'mexico': 1648, 'salad': 1649, 'aint': 1650, 'interviewed': 1651, 'filipina': 1652, 'ar': 1653, 'richard': 1654, 'jacket': 1655, '9pm': 1656, 'ba': 1657, 'loved': 1658, 'nu': 1659, 'proof': 1660, 'copy': 1661, 'style': 1662, 'superior': 1663, 'nj': 1664, 'wiping': 1665, 'spanishcvndy': 1666, 'kamala': 1667, 'tea': 1668, '24': 1669, 'unfollowing': 1670, 'weareoneexo': 1671, 'kenya': 1672, 'comfo': 1673, 'stormydaniels': 1674, 'cried': 1675, 'holding': 1676, 'fingers': 1677, 'justice': 1678, 'ice': 1679, 'cup': 1680, 'btsvotingteam': 1681, 'feelings': 1682, 'writing': 1683, 'eric': 1684, 'seriously': 1685, 'liked': 1686, 'apple': 1687, 'qua': 1688, 'captain': 1689, 'crooked': 1690, 'describe': 1691, 'sober': 1692, 'scott': 1693, 'defending': 1694, 'sides': 1695, 'raya': 
1696, 'color': 1697, '31': 1698, 'cook': 1699, 'battle': 1700, 'crackhead': 1701, 'connorhannigan2': 1702, 'offense': 1703, 'creative': 1704, 'alive': 1705, 'watched': 1706, 'eve': 1707, 'broken': 1708, 'unbelievable': 1709, 'none': 1710, 'bathroom': 1711, 'wet': 1712, 'butter': 1713, 'liberal': 1714, 'doll': 1715, 'apply': 1716, 'federal': 1717, 'administration': 1718, 'allies': 1719, 'considering': 1720, 'tweeting': 1721, 'flaw': 1722, '26': 1723, 'gross': 1724, 'facing': 1725, 'blaming': 1726, 'development': 1727, 'hiring': 1728, 'kluber': 1729, 'waited': 1730, 'infinity': 1731, 'legend': 1732, 'trains': 1733, 'cash': 1734, 'bang': 1735, 'unfollower': 1736, 'tracked': 1737, 'review': 1738, 'talent': 1739, 'nixon': 1740, 'waking': 1741, 'erictrump': 1742, 'rubber': 1743, 'trip': 1744, 'nicole': 1745, 'strange': 1746, 'failed': 1747, 'air': 1748, 'abuse': 1749, 'murder': 1750, 'unfollow': 1751, 'became': 1752, 'youtube': 1753, 'roy': 1754, 'keane': 1755, 'matthijs': 1756, 'ligt': 1757, 'captaining': 1758, 'earning': 1759, 'modest': 1760, 'screaming': 1761, 'stays': 1762, 'skinny': 1763, 'cares': 1764, 'later': 1765, 'babbelusa': 1766, 'dicks': 1767, 'haters': 1768, 'busy': 1769, 'moment': 1770, 'lips': 1771, 'kpop': 1772, '5m': 1773, 'dust': 1774, 'poorly': 1775, 'folks': 1776, 'thumb': 1777, 'lowkey': 1778, 'jung': 1779, 'tl': 1780, 'thanos': 1781, 'stopped': 1782, 'outta': 1783, 'drogon': 1784, 'sees': 1785, 'candidate': 1786, 'mrbeastyt': 1787, 'impeach': 1788, 'lang': 1789, 'incredibly': 1790, 'fuc': 1791, 'fully': 1792, 'teeth': 1793, 'impeachment': 1794, 'jerk': 1795, 'newt': 1796, 'al': 1797, 'committed': 1798, 'txt': 1799, 'spent': 1800, 'letting': 1801, 'bighit': 1802, 'trained': 1803, 'uwu': 1804, 'wor': 1805, 'mary': 1806, 'toxic': 1807, 'alot': 1808, 'judge': 1809, '3rd': 1810, 'quality': 1811, 'improve': 1812, 'clinton': 1813, 'petty': 1814, 'juiceworlddd': 1815, 'impeached': 1816, 'annoyed': 1817, 'flew': 1818, '2020': 1819, 'marine': 1820, 'luffy': 1821, 'zoro': 1822, 'brexit': 1823, 'mike': 1824, 'zendaya': 1825, 'fascism': 1826, 'attempted': 1827, 'grandmother': 1828, 'finished': 1829, 'monster': 1830, 'marriage': 1831, 'disney': 1832, 'twice': 1833, 'digital': 1834, 'republican': 1835, 'politics': 1836, 'invite': 1837, 'jerry': 1838, 'russia': 1839, '6th': 1840, 'spot': 1841, 'wallace': 1842, 'voted': 1843, 'speak': 1844, 'puppet': 1845, 'drama': 1846, 'needed': 1847, 'jypetwice': 1848, 'twicelights': 1849, 'genuinely': 1850, 'happily': 1851, 'terrified': 1852, 'teach': 1853, 'science': 1854, 'spoilers': 1855, 'loudobbs': 1856, 'ceo': 1857, 'doyou': 1858, 'threaten': 1859, 'girlfriend': 1860, 'happiness': 1861, 'walking': 1862, 'wearing': 1863, 'xd': 1864, 'beg': 1865, 'mj': 1866, 'weather': 1867, 'nursing': 1868, 'fried': 1869, 'form': 1870, 'hes': 1871, 'hats': 1872, 'maziehirono': 1873, 'israel': 1874, 'bank': 1875, 'got7': 1876, 'mane': 1877, 'agent': 1878, 'truelifeinitiativebydss': 1879, 'noel': 1880, 'delta': 1881, 'tiffany': 1882, 'radical': 1883, 'mute': 1884, 'forced': 1885, 'kim': 1886, 'talked': 1887, 'publicly': 1888, 'kissed': 1889, 'tae': 1890, 'fi': 1891, 'miserable': 1892, 'receiving': 1893, 'silly': 1894, 'pu': 1895, 'washington': 1896, 'scar': 1897, 'loud': 1898, 'spying': 1899, 'bored': 1900, 'beach': 1901, 'polo': 1902, 'nig': 1903, 'angie': 1904, 'seek': 1905, 'suggest': 1906, 'tracks': 1907, 'keitholbermann': 1908, 'ale': 1909, 'bags': 1910, 'students': 1911, 'dealer': 1912, 'lfc': 1913, 'fam': 1914, 'rahul': 1915, 'gandhi': 1916, 'min': 1917, 
'raw': 1918, 'fest': 1919, 'andrea': 1920, 'pass': 1921, 'americas': 1922, 'ballot': 1923, 'earlier': 1924, 'hondadeal4vets': 1925, 'snap': 1926, 'rlly': 1927, 'showed': 1928, 'id': 1929, 'btsworld': 1930, 'suited': 1931, 'cardi': 1932, 'missed': 1933, 'garage': 1934, 'add': 1935, 'collection': 1936, 'mitchellvii': 1937, 'expectations': 1938, 'wonde': 1939, 'author': 1940, 'kay': 1941, 'uste': 1942, 'warning': 1943, 'thunder': 1944, 'sales': 1945, 'citizens': 1946, 'realized': 1947, 'pentagon': 1948, 'division': 1949, 'retweets': 1950, 'idgaf': 1951, 'learning': 1952, 'danger': 1953, 'vi': 1954, 'genius': 1955, 'taekookmemories': 1956, 'taekook': 1957, 'nine': 1958, 'tech': 1959, 'constantly': 1960, 'demetriusharmon': 1961, 'ni': 1962, 'jackposobiec': 1963, 'alec': 1964, 'mckinney': 1965, 'serial': 1966, 'felon': 1967, 'felony': 1968, 'loves': 1969, 'engraved': 1970, 'ainment': 1971, 'brag': 1972, 'daesangs': 1973, 'gucci': 1974, 'honest': 1975, 'deep': 1976, 'gold': 1977, 'yucky': 1978, 'ooh': 1979, 'walma': 1980, 'foul': 1981, 'film': 1982, 'officer': 1983, 'pled': 1984, 'ew': 1985, 'sub': 1986, 'realsaavedra': 1987, 'brothers': 1988, 'draymond': 1989, 'digging': 1990, 'bums': 1991, 'lashes': 1992, 'ified': 1993, 'pouring': 1994, 'manny': 1995, 'landing': 1996, 'rooted': 1997, 'noahcrothman': 1998, 'fandom': 1999, 'secret': 2000, 'netflix': 2001, 'corruption': 2002, 'jose': 2003, 'mic': 2004, 'graduating': 2005, 'majors': 2006, 'hub2': 2007, 'slutty': 2008, 'busty': 2009, 'click': 2010, 'defends': 2011, 'uh': 2012, 'gu': 2013, 'rapper': 2014, 'classic': 2015, 'adve': 2016, 'awful': 2017, 'glory': 2018, 'byte': 2019, 'justintrudeau': 2020, 'dis': 2021, 'smiles': 2022, 'sets': 2023, 'shambles': 2024, 'visual': 2025, 'mfers': 2026, 'asl': 2027, 'taboo': 2028, 'penis': 2029, 'sitting': 2030, 'slick': 2031, 'needy': 2032, 'selfish': 2033, 'yahoo': 2034, 'plays': 2035, 'espn': 2036, 'projared': 2037, 'hilarious': 2038, 'schwarzenegger': 2039, 'reminds': 2040, 'funniest': 2041, 'army': 2042, 'asks': 2043, 'somehow': 2044, 'tomfitton': 2045, '16': 2046, 'nuts': 2047, 'wedding': 2048, 'chapter': 2049, 'screen': 2050, 'fell': 2051, '2015': 2052, 'thatsdax': 2053, 'trauma': 2054, 'changing': 2055, 'bouta': 2056, 'eachother': 2057, 'malcolm': 2058, 'homie': 2059, 'plate': 2060, 'garden': 2061, 'violated': 2062, 'gavin': 2063, 'costume': 2064, 'gamer': 2065, 'nerd': 2066, 'grabs': 2067, 'jisung': 2068, 'wyd': 2069, 'emma': 2070, 'exclude': 2071, 'soul': 2072, 'mcu': 2073, 'mutuals': 2074, 'percent': 2075, 'homework': 2076, 'wake': 2077, 'fast': 2078, 'saraca': 2079, 'erdc': 2080, 'nadler': 2081, 'accurate': 2082, 'boyband': 2083, 'grande': 2084, 'johnbrennan': 2085, 'smelling': 2086, 'bec': 2087, 'philsadelphia': 2088, 'lovin': 2089, 'batman': 2090, 'albe': 2091, 'demthrones': 2092, 'anyways': 2093, 'topic': 2094, 'couch': 2095, 'special': 2096, 'joel': 2097, 'fry': 2098, 'gifts': 2099, 'farmers': 2100, 'rarely': 2101, 'mighty': 2102, '0900': 2103, 'investigate': 2104, 'rushing': 2105, 'kimpetras': 2106, 'dudes': 2107, 'alcohol': 2108, 'memories': 2109, 'writers': 2110, 'cast': 2111, 'accused': 2112, 'usual': 2113, 'stage': 2114, 'fro': 2115, 'vegan': 2116, 'stir': 2117, 'mistake': 2118, 'often': 2119, 'kd': 2120, 'relate': 2121, 'behavior': 2122, 'hosting': 2123, 'snl': 2124, 'role': 2125, 'aka': 2126, 'direction': 2127, 'sleepless': 2128, 'claustrophobic': 2129, 'afraid': 2130, 'release': 2131, 'yeri': 2132, 'woke': 2133, 'hippuscampo': 2134, 'generally': 2135, 'meddling': 2136, 'meat': 2137, 
'royce': 2138, 'whats': 2139, 'shitt': 2140, 'immature': 2141, 'ayo': 2142, 'lagos': 2143, 'keeping': 2144, 'owns': 2145, 'thousands': 2146, 'bully': 2147, 'glove': 2148, 'marshals': 2149, 'creator': 2150, 'newborn': 2151, 'twins': 2152, 'cancer': 2153, 'teachers': 2154, 'billie': 2155, 'quick': 2156, 'unhx': 2157, 'foster': 2158, 'owner': 2159, 'london': 2160, 'borough': 2161, 'salty': 2162, 'pitch': 2163, 'poked': 2164, 'wendy': 2165, 'pushed': 2166, 'jokes': 2167, 'adventures': 2168, 'westeros': 2169, 'prick': 2170, 'nerves': 2171, 'consumers': 2172, 'increase': 2173, 'sometime': 2174, 'catturd2': 2175, 'attacked': 2176, 'balance': 2177, 'crypt': 2178, 'saltydkdan': 2179, 'hulk': 2180, 'greedy': 2181, 'prettyindie': 2182, 'baddest': 2183, 'fury': 2184, 'finish': 2185, 'incase': 2186, 'goddard': 2187, 'ball': 2188, 'guilt': 2189, 'published': 2190, 'mollyjongfast': 2191, 'mumbling': 2192, 'nikitadragun': 2193, 'davidhogg111': 2194, 'ab': 2195, 'bf': 2196, 'books': 2197, 'houses': 2198, 'awake': 2199, 'asleep': 2200, 'rude': 2201, 'winterfell': 2202, 'kong': 2203, 'aye': 2204, 'dese': 2205, 'poster': 2206, 'misogynistic': 2207, 'lazy': 2208, 'hahaha': 2209, 'uhhh': 2210, 'nails': 2211, 'award': 2212, 'theres': 2213, 'burger': 2214, 'mobile': 2215, 'shoes': 2216, 'barely': 2217, 'dms': 2218, 'stixilfox': 2219, 'horrifying': 2220, 'cleaning': 2221, '38': 2222, 'groups': 2223, 'barcelona': 2224, 'scored': 2225, 'dijk': 2226, 'sleeping': 2227, 'suspicious': 2228, 'screamed': 2229, 'cristiano': 2230, 'football': 2231, 'ga': 2232, 'maker7': 2233, 'caused': 2234, 'mariaba': 2235, 'iromo': 2236, 'built': 2237, 'lmfaooo': 2238, 'tarekfatah': 2239, 'hypocrisy': 2240, 'praising': 2241, 'reflects': 2242, 'islamist': 2243, 'loool': 2244, 'ate': 2245, 'jonas': 2246, 'guns': 2247, 'stfu': 2248, 'images': 2249, 'ghost': 2250, 'deepstateexpose': 2251, 'plz': 2252, 'blunt': 2253, 'trumps': 2254, 'war': 2255, 'offf': 2256, 'hinking': 2257, 'palmer': 2258, 'cards': 2259, 'rush': 2260, 'decision': 2261, 'ideas': 2262, 'drinks': 2263, 'sammy': 2264, 'ish': 2265, 'comparing': 2266, 'twist': 2267, 'contract': 2268, '839': 2269, '90sym': 2270, 'grocery': 2271, 'blessed': 2272, 'ure': 2273, 'appreciate': 2274, 'ms': 2275, 'edit': 2276, 'photos': 2277, 'jo': 2278, 'scenes': 2279, 'omnipotentlexy': 2280, 'quran': 2281, 'muffin': 2282, 'whose': 2283, 'trade': 2284, 'creamy': 2285, 'accept': 2286, 'joking': 2287, 'lotta': 2288, 'bisexual': 2289, 'eggs': 2290, 'nev': 2291, 'signs': 2292, 'looney': 2293, 'produce': 2294, 'shoulder': 2295, 'feat': 2296, 'antifa': 2297, 'mango': 2298, 'moms': 2299, 'ment': 2300, 'leash': 2301, 'lawn': 2302, 'design': 2303, 'drove': 2304, 'amd': 2305, 'uk': 2306, 'mates': 2307, 'drew': 2308, 'eyebrows': 2309, 'lmaoooo': 2310, 'box': 2311, 'arab': 2312, 'conditions': 2313, 'saintsfan5348': 2314, 'motive': 2315, 'guts': 2316, 'attack': 2317, 'pace': 2318, 'iran': 2319, 'winner': 2320, 'announced': 2321, 'chair': 2322, 'lucky': 2323, 'huening': 2324, 'kai': 2325, 'islamic': 2326, 'itsscoop': 2327, 'lmfaooooo': 2328, 'order': 2329, 'bbc': 2330, 'brexitpa': 2331, '89000': 2332, 'jordan': 2333, 'nalu': 2334, 'banging': 2335, 'pics': 2336, 'unicorn': 2337, 'nationals': 2338, 'knee': 2339, 'entirely': 2340, 'mimirocah1': 2341, 'africa': 2342, 'sits': 2343, 'shades': 2344, '1st': 2345, 'interesting': 2346, 'user': 2347, 'humanity': 2348, 'therickwilson': 2349, 'retweeted': 2350, 'attacks': 2351, 'terrorism': 2352, 'mistakes': 2353, 'leaders': 2354, 'coach': 2355, 'rly': 2356, 'hug': 2357, 'league': 
2358, 'hirono': 2359, 'protecting': 2360, 'currently': 2361, 'wh': 2362, 'fever': 2363, 'affect': 2364, 'cases': 2365, 'pads': 2366, 'biggie': 2367, 'waist': 2368, 'puff': 2369, 'understands': 2370, 'embarrassment': 2371, 'nail': 2372, 'amazon': 2373, 'clutch': 2374, 'ps4': 2375, 'purges': 2376, 'juanlovescock': 2377, 'nobody2': 2378, 'kekeslime': 2379, 'suddenly': 2380, 'ups': 2381, 'death': 2382, 'livepd': 2383, 'addition': 2384, 'documents': 2385, 'shade': 2386, 'openly': 2387, 'washing': 2388, 'television': 2389, 'created': 2390, 'magat': 2391, 'richie': 2392, 'obviously': 2393, 'thankful': 2394, 'solomon': 2395, 'tyler': 2396, 'adulting': 2397, 'dads': 2398, 'cooking': 2399, 'andrew': 2400, 'known': 2401, 'egg': 2402, 'dril': 2403, 'asherxloren': 2404, 'announcement': 2405, 'legitimate': 2406, 'tommy': 2407, 'babyb0ybangtan': 2408, 'searching': 2409, 'lucifer': 2410, 'bear': 2411, 'august': 2412, '2018': 2413, 'somethin': 2414, 'episode': 2415, 'disease': 2416, 'mmm': 2417, 'format': 2418, 'pit': 2419, 'goddamn': 2420, 'lrihendry': 2421, 'chatbycc': 2422, 'ik': 2423, 'taeil': 2424, 'possibly': 2425, 'rack': 2426, 'sudden': 2427, 'creepy': 2428, 'bru': 2429, 'largest': 2430, 'brienne': 2431, 'trans': 2432, 'mod': 2433, 'potts': 2434, 'respond': 2435, 'apology': 2436, 'meg': 2437, 'finds': 2438, 'wheel': 2439, 'porn': 2440, 'thanking': 2441, 'racing': 2442, 'chanhun': 2443, 'unit': 2444, 'sm': 2445, 'senator': 2446, 'dey': 2447, 'kaya': 2448, 'dropped': 2449, 'pushing': 2450, 'robreiner': 2451, 'hilton': 2452, 'abeg': 2453, 'sissoko': 2454, 'tote': 2455, 'kook': 2456, 'ner': 2457, 'numb': 2458, 'edkrassen': 2459, 'williams': 2460, 'someday': 2461, 'dlmpleskz': 2462, 'jr': 2463, 'raped': 2464, 'others': 2465, 'slave': 2466, 'clip': 2467, 'spoon': 2468, 'shutting': 2469, 'leak': 2470, 'progress': 2471, 'confirming': 2472, 'pauljasonklein': 2473, 'benefit': 2474, 'earphones': 2475, 'elivalley': 2476, 'drained': 2477, 'ingrahamangle': 2478, 'crime': 2479, 'runners': 2480, 'retweeting': 2481, 'yank': 2482, 'cape': 2483, 'marks': 2484, 'snake': 2485, 'area': 2486, 'ability': 2487, 'icle': 2488, 'cheap': 2489, 'suit': 2490, 'schiff': 2491, 'petttyy': 2492, 'xx': 2493, 'alike': 2494, 'clicked': 2495, 'picks': 2496, 'boomin': 2497, 'eternal': 2498, 'price': 2499, 'bop': 2500, 'begins': 2501, 'drops': 2502, 'sp': 2503, 'kili': 2504, 'hat': 2505, 'ju': 2506, 'build': 2507, 'camera': 2508, 'belief': 2509, 'stops': 2510, 'bilingual': 2511, 'illiterate': 2512, 'patriots': 2513, 'military': 2514, 'ligue': 2515, 'coupe': 2516, 'france': 2517, 'promise': 2518, 'disrespectful': 2519, 'replies': 2520, '30cm': 2521, 'doggintrump': 2522, 'upon': 2523, 'grinding': 2524, 'easily': 2525, 'heck': 2526, 'safe': 2527, 'option': 2528, 'pair': 2529, 'holder': 2530, 'scrape': 2531, 'covered': 2532, 'h3h3productions': 2533, 'headphones': 2534, 'albums': 2535, 'deserved': 2536, 'mmpadellan': 2537, 'covering': 2538, 'windows': 2539, 'book': 2540, 'dare': 2541, '150': 2542, 'giveaway': 2543, 'alexivenegas': 2544, 'terror': 2545, 'bones': 2546, 'corbyn': 2547, 'marie': 2548, 'brownsuga': 2549, 'situations': 2550, 'kawhi': 2551, 'iced': 2552, 'coffee': 2553, 'starbucks': 2554, 'reusable': 2555, 'awareness': 2556, 'mannymua733': 2557, 'shining': 2558, 'growing': 2559, 'organization': 2560, 'meditation': 2561, 'excellent': 2562, 'regional': 2563, 'timing': 2564, 'gotfinale': 2565, 'lunatic': 2566, 'sandrawocs': 2567, 'mactroll5': 2568, 'rachelstarrxxx': 2569, 'michaelavenatti': 2570, 'noticed': 2571, 'dickhead': 2572, 
'shits': 2573, 'felix': 2574, 'adult': 2575, 'sat': 2576, 'incredible': 2577, 'beings': 2578, 'navy': 2579, 'blessing': 2580, 'aynrandpaulryan': 2581, 'loop': 2582, 'barrlied': 2583, 'rub': 2584, 'b52malmet': 2585, 'johnpavlovitz': 2586, 'christian': 2587, 'prosecutor': 2588, 'biological': 2589, 'relationships': 2590, 'cream': 2591, 'brie': 2592, 'professional': 2593, 'ainly': 2594, 'younger': 2595, 'key': 2596, 'gangbang': 2597, 'sloppy': 2598, 'extr': 2599, 'mix': 2600, 'taemin': 2601, 'ina': 2602, 'submit': 2603, 'management': 2604, 'pon': 2605, 'formal': 2606, 'mining': 2607, 'sense': 2608, 'enews': 2609, 'righteousdem': 2610, 'jukazi2r': 2611, 'lining': 2612, 'snitch': 2613, 'rice': 2614, 'david': 2615, 'false': 2616, 'feed': 2617, 'protect': 2618, 'ship': 2619, 'ruby': 2620, 'hannahrodgers': 2621, 'lori': 2622, 'laughlin': 2623, 'brock': 2624, 'turner': 2625, 'anywhere': 2626, 'jay': 2627, 'mens': 2628, 'laugh': 2629, '50000': 2630, 'manila': 2631, 'klay': 2632, 'thompson': 2633, '247jimin': 2634, 'jamie': 2635, 'tolerate': 2636, 'crewcrew': 2637, 'attempting': 2638, 'vax': 2639, 'inna': 2640, 'gauntlet': 2641, 'selfies': 2642, 'bff': 2643, 'playlist': 2644, 'mentality': 2645, 'equally': 2646, 'yappie': 2647, 'millennial': 2648, 'inspiration': 2649, '120': 2650, '365': 2651, 'starks': 2652, 'sings': 2653, '100000': 2654, 'icymi': 2655, 'coup': 2656, 'raised': 2657, 'brought': 2658, 'peanut': 2659, 'motherfucking': 2660, 'lolol': 2661, 'pjhughes45': 2662, 'etc': 2663, 'buddy': 2664, 'rec': 2665, 'aiko': 2666, 'houston': 2667, 'fresh': 2668, 'saddest': 2669, 'kennedy': 2670, 'complain': 2671, 'vids': 2672, 'millionaires': 2673, 'worldstar': 2674, 'wifi': 2675, 'stood': 2676, 'seat': 2677, 'bretmanrock': 2678, 'gospel': 2679, 'print': 2680, 'qu': 2681, 'unusual': 2682, 'elect': 2683, 'access': 2684, 'patience': 2685, 'set': 2686, 'spotlightbts': 2687, 'audience': 2688, 'nevacoblan': 2689, '2012': 2690, 'ruined': 2691, 'trend': 2692, 'scotland': 2693, 'scientists': 2694, 'warn': 2695, 'questions': 2696, 'creature': 2697, 'invoke': 2698, 'insurrection': 2699, 'rocks': 2700, 'explosion': 2701, 'scream': 2702, 'culture': 2703, '08': 2704, 'photo': 2705, 'climate': 2706, 'designate': 2707, 'venezuela': 2708, 'russians': 2709, 'cubans': 2710, 'ivankatrump': 2711, 'boat': 2712, 'roasted': 2713, 'braids': 2714, 'trailer': 2715, 'evening': 2716, 'avengers': 2717, 'race': 2718, 'slurpee': 2719, 'machine': 2720, 'multi': 2721, 'angle': 2722, 'clips': 2723, 'vps': 2724, 'education': 2725, 'session': 2726, 'besides': 2727, 'dm': 2728, 'pan': 2729, 'hyunjin': 2730, 'heejin': 2731, 'dragged': 2732, 'whitehouse': 2733, 'alphabet': 2734, 'rodgers': 2735, 'convenient': 2736, 'boi': 2737, 'wheres': 2738, 'sneak': 2739, 'willing': 2740, 'slap': 2741, 'hanna': 2742, 'stolen': 2743, 'table': 2744, 'kerry': 2745, 'hap': 2746, 'brush': 2747, 'completely': 2748, 'hoping': 2749, 'japan': 2750, 'krassenstein': 2751, 'juicy': 2752, 'staples': 2753, 'girlsreallyrule': 2754, 'assholes': 2755, 'enablers': 2756, 'glasses': 2757, 'garza': 2758, 'alexmcl': 2759, 'wash': 2760, 'jane': 2761, 'ph': 2762, 'scum': 2763, 'toes': 2764, 'tina': 2765, 'taste': 2766, 'grey': 2767, 'overrated': 2768, 'unimpo': 2769, 'dragons': 2770, 'vulgar': 2771, 'valid': 2772, 'crank': 2773, 'hype': 2774, 'informed': 2775, 'campaign': 2776, 'abu': 2777, 'sunrise': 2778, 'invent': 2779, 'learned': 2780, 'prepare': 2781, 'rules': 2782, 'ge': 2783, 'offices': 2784, 'parallel': 2785, 'smoking': 2786, 'carolina': 2787, 'rn': 2788, 'mickeyknox': 
2789, 'honesty': 2790, 'unfriend': 2791, 'sink': 2792, 'convince': 2793, 'freely': 2794, 'tattoo': 2795, 'universe': 2796, 'rage': 2797, 'pop': 2798, '34': 2799, 'personality': 2800, 'city': 2801, 'nug': 2802, 'recent': 2803, 'childcare': 2804, 'initial': 2805, 'labor': 2806, 'defence': 2807, 'ho': 2808, 'explaining': 2809, 'puppies': 2810, 'el': 2811, 'mon': 2812, 'maintain': 2813, 'prope': 2814, 'exposing': 2815, 'shattawalegh': 2816, 'causing': 2817, 'panic': 2818, 'sam': 2819, 'billratchet': 2820, 'chances': 2821, 'alt': 2822, 'bu': 2823, 'approved': 2824, 'additional': 2825, '80': 2826, 'miles': 2827, 'gah': 2828, 'mystery': 2829, 'event': 2830, 'jimmfelton': 2831, 'bruce': 2832, 'rashford': 2833, 'stock': 2834, 'church': 2835, 'pains': 2836, 'festivals': 2837, 'indeed': 2838, 'google': 2839, 'battymamzelle': 2840, 'trout': 2841, 'lmfaoooo': 2842, 'keys': 2843, 'lmaooo': 2844, 'subdeliveryzone': 2845, 'edet': 2846, 'shapiro': 2847, '5sos': 2848, 'almostjingo': 2849, 'brains': 2850, 'masterpiece': 2851, 'memory': 2852, 'btsanalytics': 2853, 'acceptable': 2854, 'compliments': 2855, 'los': 2856, 'murdered': 2857, 'stfuiol': 2858, 'pixie': 2859, 'aware': 2860, 'national': 2861, 'coward': 2862, 'bucks': 2863, 'community': 2864, 'turkey': 2865, 'prime': 2866, 'smaller': 2867, 'urge': 2868, 'masculine': 2869, 'gif': 2870, 'declare': 2871, 'cha': 2872, 'shape': 2873, 'dif': 2874, 'fired': 2875, 'stars': 2876, 'constitutional': 2877, 'ethics': 2878, 'dates': 2879, 'centre': 2880, 'humans': 2881, 'taxes': 2882, 'speaks': 2883, 'shelter': 2884, 'punch': 2885, 'staffing': 2886, 'clea': 2887, 'carer': 2888, 'spare': 2889, 'christmas': 2890, 'park': 2891, 'jk': 2892, 'wildin': 2893, 'rematch': 2894, 'senatedems': 2895, 'bullies': 2896, 'mess': 2897, 'oil': 2898, 'bonny': 2899, 'committee': 2900, 'realmattcouch': 2901, 'madder': 2902, 'rupaul': 2903, 'dy': 2904, 'adorable': 2905, 'rubbing': 2906, 'ken': 2907, 'kindness': 2908, 'confront': 2909, 'insecurity': 2910, 'terrifying': 2911, 'mandatory': 2912, 'attending': 2913, 'lived': 2914, 'soyeon': 2915, 'cheeks': 2916, 'ss': 2917, 'express': 2918, 'ole': 2919, 'expecting': 2920, 'awkward': 2921, 'cindtrillella': 2922, 'himsel': 2923, 'prom': 2924, 'poc': 2925, 'asos': 2926, 'beau': 2927, 'bailey': 2928, 'champ': 2929, 'unlv': 2930, 'livenationkpop': 2931, 'bugiinillusion': 2932, 'rhetoric': 2933, 'potus': 2934, '5000': 2935, 'milestone': 2936, 'delivery': 2937, 'whiny': 2938, 'nigel': 2939, 'oneseoulph': 2940, 'defbabybird': 2941, 'badass': 2942, '06': 2943, 'chicago': 2944, 'jm': 2945, 'coolest': 2946, 'ksiolajidebt': 2947, 'dax': 2948, 'proceeding': 2949, 'anticipated': 2950, 'mu': 2951, 'gap': 2952, 'grateful': 2953, 'spell': 2954, 'subconscious': 2955, 'ana': 2956, 'pulled': 2957, 'imbecile': 2958, 'vf': 2959, 'ne': 2960, 'speed': 2961, 'prod': 2962, 'results': 2963, 'threatened': 2964, 'lat': 2965, 'interview': 2966, 'awek': 2967, 'tak': 2968, 'standard': 2969, 'veteran': 2970, 'tinder': 2971, 'switched': 2972, 'cnnpolitics': 2973, 'umm': 2974, 'benn': 2975, 'queens': 2976, 'deranged': 2977, 'ou': 2978, 'd2': 2979, 'unfair': 2980, 'trynna': 2981, '1972': 2982, 'grab': 2983, 'onto': 2984, 'presidency': 2985, 'finding': 2986, 'lovers': 2987, 'strangers': 2988, 'pts': 2989, 'homies': 2990, 'risk': 2991, 'therefore': 2992, 'sippurified': 2993, 'hitler': 2994, 'counties': 2995, 'sounds': 2996, 'billboard': 2997, 'rihanna': 2998, 'bodies': 2999, 'got7official': 3000, 'sleeves': 3001, 'hide': 3002, 'title': 3003, 'jamiroquai': 3004, 'picking': 3005, 
'miami': 3006, 'imaginable': 3007, 'gurmeetramrahim': 3008, 'derasachasauda': 3009, 'dera': 3010, 'sacha': 3011, 'sauda': 3012, 'uncomfo': 3013, 'rip': 3014, 'strzok': 3015, 'iq': 3016, 'impressed': 3017, 'forms': 3018, 'believes': 3019, 'arabs': 3020, 'yellow': 3021, 'sc': 3022, 'powerful': 3023, 'plenty': 3024, 'enjoying': 3025, 'wings': 3026, 'jamierodr14': 3027, 'chamber': 3028, 'applies': 3029, 'selfie': 3030, 'crack': 3031, 'wrinkly': 3032, 'evans': 3033, 'immigration': 3034, 'shawty': 3035, 'cbsnews': 3036, 'causes': 3037, 'li': 3038, 'anc': 3039, 'cops': 3040, 'lick': 3041, 'lmaooooo': 3042, 'sayang': 3043, 'tw': 3044, 'productive': 3045, 'crooks': 3046, 'answers': 3047, 'vlive': 3048, 'capable': 3049, 'pictures': 3050, 'phones': 3051, 'unwhitewashed': 3052, 'unities': 3053, 'fade': 3054, 'shocking': 3055, 'february': 3056, 'boom': 3057, 'fill': 3058, 'prep': 3059, 'commission': 3060, 'locking': 3061, 'promo': 3062, 'overwhelmed': 3063, 'airplane': 3064, 'response': 3065, 'heading': 3066, 'bullet': 3067, 'pack': 3068, 'ignorance': 3069, 'sized': 3070, 'takes': 3071, 'naw': 3072, '99': 3073, 'mothers': 3074, 'grew': 3075, 'milkshake': 3076, 'ashes': 3077, 'excited': 3078, 'cross': 3079, 'merch': 3080, 'alex': 3081, 'rachel': 3082, 'day6': 3083, 'bare': 3084, 'minimum': 3085, 'foreverashlyn': 3086, 'ayoovik': 3087, 'prucenter': 3088, 'mothersday': 3089, 'lacey': 3090, 'lay': 3091, 'thot': 3092, 'competitive': 3093, 'statistics': 3094, 'strongly': 3095, 'bambam': 3096, 'gautamgambhir': 3097, 'playoffs': 3098, 'mountain': 3099, 'german': 3100, 'qt': 3101, 'victory': 3102, 'adam': 3103, 'previous': 3104, 'sight': 3105, 'omggg': 3106, 'platform': 3107, 'runs': 3108, 'hoarsewisperer': 3109, 'covera': 3110, 'belt': 3111, 'assume': 3112, 'mariah': 3113, 'smiling': 3114, 'remembered': 3115, 'mission': 3116, 'values': 3117, 'duo': 3118, 'footyhumour': 3119, 'nct': 3120, 'bluegrass': 3121, 'surpassed': 3122, 'nightmare': 3123, 'resign': 3124, 'stealing': 3125, 'skz': 3126, 'drowning': 3127, 'cats': 3128, 'alena': 3129, 'neither': 3130, 'struggling': 3131, 'revrrlewis': 3132, 'pod': 3133, 'separate': 3134, 'citizen': 3135, 'alll': 3136, 'naughty': 3137, 'beth': 3138, 'lack': 3139, 'un': 3140, 'na': 3141, 'tetris': 3142, 'donated': 3143, 'reeseyola': 3144, 'peter': 3145, 'communication': 3146, 'gn': 3147, 'unstable': 3148, 'jen': 3149, 'direct': 3150, 'coolpadind': 3151, 'sapper': 3152, '87': 3153, 'entendre': 3154, 'reach': 3155, 'dodgers': 3156, 'twitch': 3157, 'youu': 3158, 'prospect': 3159, 'abe': 3160, 'ucl': 3161, 'tht': 3162, 'sweety': 3163, 'airjunebug': 3164, 'bay': 3165, 'caleon': 3166, 'gaslighting': 3167, 'shelovetimothy': 3168, 'tavianjordan': 3169, 'summer': 3170, 'trips': 3171, 'kickbacks': 3172, 'hermescxbin': 3173, 'spaceboykenny': 3174, 'cel': 3175, 'shading': 3176, 'ts': 3177, 'feeeling': 3178, 'bmt': 3179, 'newspaper': 3180, 'tedlieu': 3181, 'actively': 3182, 'assistance': 3183, 'tikkkii': 3184, 'thekoyostore': 3185, 'rainbow6game': 3186, 'hnnnng': 3187, 'beginning': 3188, 'regarded': 3189, 'maverick': 3190, 'surgical': 3191, 'labeled': 3192, 'beast': 3193, 'strangementle': 3194, 'deltarunes': 3195, 'bangers': 3196, 'natalianoyes': 3197, 'melissafumeros': 3198, 'emily': 3199, 'nyc': 3200, '28th': 3201, 'aewrestling': 3202, 'hangman': 3203, 'cassidy': 3204, 'viddywel2': 3205, 'zipamoney': 3206, 'subs': 3207, 'louis': 3208, 'vuitton': 3209, 'medical': 3210, 'lawyers': 3211, 'nurses': 3212, 'brewer383': 3213, 'heard00': 3214, 'daishatatianna': 3215, 'tyousb': 3216, 'edibles': 
7193, 'launching': 7194, 'bounce': 7195, 'walks': 7196, 'richyxez': 7197, 'laleng': 7198, 'tap': 7199, 'phuthadithjaba': 7200, 'justonevoice4': 7201, 'finger': 7202, 'scale': 7203, 'intentionaly': 7204, 'activity': 7205, 'nonentity': 7206, 'vagabond': 7207, 'satanthrone': 7208, 'ugaconqueso': 7209, 'seth': 7210, 'altercation': 7211, 'prisonplanet': 7212, 'enabler': 7213, 'example': 7214, 'hysteria': 7215, 'maryciel0': 7216, 'cloesy': 7217, 'alphaomegasin': 7218, 'battery': 7219, 'dragonflyjonez': 7220, 'corpse': 7221, 'confessed': 7222, 'murdering': 7223, 'honorable': 7224, 'sometim': 7225, 'itsluke5sos': 7226, 'albasarachnid': 7227, 'concern': 7228, 'dilettante': 7229, 'whisperers': 7230, 'desse': 7231, 'warm': 7232, 'glaze': 7233, 'jimcarrey': 7234, 'paint': 7235, 'bizzlecrownz': 7236, 'acoustic': 7237, 'justinbieber': 7238, 'skinnyy': 7239, 'tanishiaxo': 7240, 'orrrrrr': 7241, 'loss': 7242, 'mrosheaa': 7243, 'angola': 7244, 'nacho': 7245, 'oryttt': 7246, 'transcripts': 7247, 'whale': 7248, 'existence': 7249, 'tank': 7250, 'tiny': 7251, 'lukeoneil47': 7252, 'clue': 7253, 'regarding': 7254, 'daddies': 7255, 'pet': 7256, 'iamsteveharvey': 7257, 'vaultempowers': 7258, 'conference': 7259, 'sheraton': 7260, 'universal': 7261, 'angeles': 7262, 'speakers': 7263, 'mattgaetz': 7264, 'manuel': 7265, 'oliver': 7266, 'parkland': 7267, 'joaquin': 7268, 'violenc': 7269, 'daydavonne': 7270, 'tribunal': 7271, 'unbothered': 7272, 'gregoryttaylor2': 7273, 'aaronsojourner': 7274, 'filmsbybenedict': 7275, 'rogers': 7276, 'bustfatnuts': 7277, 'yxngxiaothong': 7278, 'namusunrise': 7279, 'exams': 7280, 'languages': 7281, 'apps': 7282, 'kingdupuis17': 7283, 'ooo': 7284, 'plzz': 7285, 'barliv': 7286, 'ynwa': 7287, 'jaidachan': 7288, 'choking': 7289, 'dammit': 7290, 'thebiggestloser': 7291, 'whitewash': 7292, 'roopalisriv': 7293, 'grandfather': 7294, 'narendra': 7295, 'hahah': 7296, 'abdulmahmud01': 7297, 'payooo': 7298, 'laughter': 7299, 'punches': 7300, 'pius': 7301, 'adesanmi': 7302, 'memo': 7303, 'angelz': 7304, 'jdnsjsnsnsjsn': 7305, 'aaaaaaaaaaaaaaaaa': 7306, 'umineko': 7307, 'plush': 7308, 'init': 7309, 'podcastsjudge': 7310, 'eleldllwlrlwkrkks': 7311, 'gukksbunny': 7312, 'vines': 7313, 'overhual': 7314, 'telelance': 7315, 'lesser': 7316, 'svfuuu': 7317, '170': 7318, 'cm': 7319, 'height': 7320, '152cm': 7321, 'gremlin': 7322, 'thedailyedge': 7323, 'realdonaldtru': 7324, 'nitegame': 7325, 'sooo': 7326, 'dumass': 7327, 'pump': 7328, 'dcweatherman': 7329, 'ight': 7330, 'downloading': 7331, 'fools': 7332, 'bigot': 7333, 'pets': 7334, 'ibjiyongi': 7335, 'natashatynes': 7336, 'miniwhiskk': 7337, 'vlog': 7338, 'nyctspts': 7339, 'ayfernyc': 7340, 'cspan': 7341, 'sadiecashbat': 7342, 'equal': 7343, 'transphobes': 7344, 'smokescreen': 7345, 'promot': 7346, 'processing': 7347, 'pooramor': 7348, 'admit': 7349, 'thehoney': 7350, 'sonofpink': 7351, 'giggled': 7352, 'playfully': 7353, 'squirm': 7354, 'grasp': 7355, 'arronorriss': 7356, 'bikes': 7357, 'peyton': 7358, 'dollbanger': 7359, 'bella': 7360, 'roze': 7361, 'flashing': 7362, 'pounded': 7363, 'mikasa': 7364, 'milezdas': 7365, 'cris': 7366, 'dani': 7367, 'skamespa': 7368, 'ethehustla': 7369, 'gofundm': 7370, 'astroiogywhore': 7371, 'deer': 7372, 'breast': 7373, 'greens': 7374, 'ham': 7375, 'hocks': 7376, 'chan9xi': 7377, 'xlnran': 7378, 'au': 7379, 'teenagerlilac': 7380, 'sexually': 7381, 'assaulted': 7382, 'pacothasavage': 7383, 'ally': 7384, 'lip': 7385, '03': 7386, 'goth': 7387, 'ashleynic0le': 7388, 'shed': 7389, 'bih': 7390, 'woolimusic': 7391, 'dubstep': 7392, 
'lineups': 7393, 'stacked': 7394, 'headliner': 7395, 'probation': 7396, 'slatt': 7397, 'tnichelleee': 7398, 'sjws': 7399, 'ape': 7400, 'andersen': 7401, 'nodded': 7402, 'smiled': 7403, 'thousand': 7404, 'hans': 7405, 'stubborn': 7406, 'k33yuh': 7407, 'jacuzzi': 7408, 'brittknowsbestt': 7409, 'swimtosafety1st': 7410, 'morocco': 7411, 'arkloster': 7412, 'ferretvillager': 7413, 'iamelijah97': 7414, 'rachellshantal': 7415, 'ladies': 7416, 'brash': 7417, 'limp': 7418, 'wristed': 7419, 'acekatana': 7420, 'cliffe': 7421, 'elegant': 7422, 'sufficiency': 7423, 'famil': 7424, 'shalini': 7425, 'patel14': 7426, 'favorites': 7427, 'applicable': 7428, 'defeat': 7429, 'debt': 7430, 'owed': 7431, 'bladeweiser': 7432, 'blast': 7433, 'gaon': 7434, 'weekly': 7435, '102': 7436, '162': 7437, '699': 7438, '026': 7439, '606': 7440, 'sifill': 7441, 'ldf': 7442, 'amos': 7443, 'coates': 7444, 'dnc': 7445, 'spokesmen': 7446, 'conduct': 7447, 'followingval': 7448, 'flattered': 7449, 'cuffing': 7450, 'sekusa1': 7451, 'refugees': 7452, 'sweden': 7453, 'hansuniverse': 7454, 'scorpios': 7455, 'rosylikerosie': 7456, 'itch': 7457, 'fym': 7458, 'shao': 7459, 'dayum': 7460, 'tied': 7461, 'billowing': 7462, 'sligh': 7463, 'grizz': 7464, 'campbell': 7465, 'snapped': 7466, 'barrresign': 7467, 'weenie': 7468, 'meanie': 7469, 'hosthetics': 7470, 'killakow': 7471, 'airbagmoments': 7472, 'notorious': 7473, 'flouter': 7474, 'obligations': 7475, 'oaths': 7476, 'prices': 7477, 'listed': 7478, 'gates': 7479, 'bedrooms': 7480, 'gate': 7481, 'towyn': 7482, 'rider': 7483, 'goranger': 7484, 'pipcsmith': 7485, 'goodweekendmag': 7486, 'musician': 7487, 'pash': 7488, 'jemelehill': 7489, 'mattwalshblog': 7490, 'embarrassingly': 7491, 'wil': 7492, 'realdenman': 7493, 'bluerobotdesign': 7494, 'aiidyyl': 7495, 'qasimrashid': 7496, '5t': 7497, 'immigrating': 7498, 'pulgasboxeo': 7499, 'daire': 7500, 'nugent': 7501, 'yib': 7502, 'cameron': 7503, 'voice': 7504, 'whiney': 7505, 'sucide': 7506, 'firmeprincess': 7507, 'humility': 7508, 'edward': 7509, 'hulse': 7510, 'baguette': 7511, 'licht': 7512, '2084': 7513, 'talaga': 7514, 'si': 7515, 'dawn': 7516, 'rationa': 7517, 'earjordan': 7518, 'slaw': 7519, 'finktristan': 7520, 'giannifarley': 7521, 'gatorgay7': 7522, 'iamkevingates': 7523, '90': 7524, 'multnomahcounty': 7525, 'alissonfiair': 7526, 'dbinea': 7527, 'senatordurbin': 7528, 'carefully': 7529, 'colleagues': 7530, 'coordinate': 7531, 'ohokaysuree': 7532, 'britishvogue': 7533, 'givenchy': 7534, 'arianagrande': 7535, 'itsrally': 7536, '450': 7537, 'pppapin': 7538, 'alexgarcia': 7539, 'wx': 7540, 'showers': 7541, 'gusty': 7542, 'winds': 7543, 'tangled': 7544, 'complicatedly': 7545, 'tangle': 7546, 'plausible': 7547, 'precociousism': 7548, 'grace': 7549, 'bnha': 7550, 'oc': 7551, 'detailed': 7552, 'noahr84': 7553, 'conclusively': 7554, 'spied': 7555, 'reall': 7556, 'neonflag': 7557, 'cynetsystems': 7558, 'firm': 7559, 'consultancy': 7560, 'rolandscahill': 7561, 'unsatisfied': 7562, 'unsat': 7563, 'geniusbastard': 7564, 'wasted': 7565, 'inconsistent': 7566, 'loml': 7567, 'enjoyable': 7568, 'doughyums': 7569, '1the9': 7570, 'fs': 7571, 'nims': 7572, 'editing': 7573, 'provides': 7574, 'trusttheprocess': 7575, 'ali': 7576, 'zafar': 7577, 'danpfeiffer': 7578, 'hannahh': 7579, 'mx': 7580, 'vibetickets': 7581, 'hoodie': 7582, 'difficulties': 7583, 'choosetobfree': 7584, 'adjunctprofessr': 7585, 'uranium': 7586, 'triniteee': 7587, 'lexxxxoooo': 7588, 'parent': 7589, 'joinery': 7590, 'outdoo': 7591, 'kou': 7592, 'nokardash': 7593, 'rabbit': 7594, 'ines': 
7595, 'durao': 7596, 'itsgav86': 7597, 'une': 7598, 'canucksplace': 7599, 'bimjenning69': 7600, 'petey': 7601, 'bros': 7602, 'silenced': 7603, 'dinne': 7604, 'dimplesjoons': 7605, 'jawnhouston': 7606, 'rappers': 7607, 'indie': 7608, 'coochielomein': 7609, 'stall': 7610, 'oath': 7611, 'pherrosb': 7612, 'doooomed': 7613, 'waz': 7614, 'oso': 7615, '2x': 7616, 'senategop': 7617, 'feinstein': 7618, 'chinese': 7619, 'swayjimenez': 7620, 'lurking': 7621, 'oti': 7622, 'psychological': 7623, 'lancemaraj': 7624, 'iggy': 7625, 'vulture': 7626, 'unlike': 7627, 'foxy': 7628, 'frees': 7629, 'goodshepherd316': 7630, 'scare': 7631, 'invited': 7632, 'suffer': 7633, 'unto': 7634, 'parentscigang': 7635, 'visitor': 7636, 'interactive': 7637, 'guidance': 7638, 'fantastic': 7639, 'tool': 7640, 'skygio': 7641, '12000': 7642, 'yanslay': 7643, 'nkirukanistoran': 7644, 'shell': 7645, 'majeure': 7646, 'expo': 7647, 'echothecall': 7648, 'vp': 7649, 'affairs': 7650, 'abundanc': 7651, 'imamofpeace': 7652, 'designates': 7653, 'org': 7654, 'dollfacebeautii': 7655, 'barf': 7656, 'thanksgiving': 7657, 'inhabitants': 7658, 'mdlrlrnz': 7659, 'hehehe': 7660, 'daniellafrella': 7661, 'bah': 7662, 'nominating': 7663, 'decisio': 7664, 'fella': 7665, 'stella': 7666, 'ohteenquotes': 7667, 'temporary': 7668, 'justinamash': 7669, 'persona': 7670, 'shoul': 7671, 'daniela': 7672, 'florezz': 7673, 'mgrant76308': 7674, 'pence': 7675, 'embattled': 7676, 'ilha': 7677, 'twad': 7678, 'income': 7679, 'coun': 7680, 'shaniyaaroxy': 7681, 'phase': 7682, 'stormclaudi': 7683, 'eliminating': 7684, 'ninawest': 7685, 'thelaurenchen': 7686, 'lordcaccioepepe': 7687, 'racial': 7688, 'supremacist': 7689, 'colon': 7690, 'demonlomolatile': 7691, 'deliverance': 7692, 'bleachernation': 7693, 'humor': 7694, 'soiomamacitas': 7695, 'belle': 7696, 'jlou': 7697, 'yukittyzen': 7698, 'fancams': 7699, 'freeme93': 7700, 'smoked': 7701, 'shisha': 7702, 'rented': 7703, 'whips': 7704, 'eid': 7705, 'montlake': 7706, 'bridge': 7707, 'reopened': 7708, 'traffic': 7709, '01': 7710, 'byeeee': 7711, 'cells': 7712, 'doonaught': 7713, 'ultramom': 7714, 'whamen': 7715, 'minuets': 7716, 'jasmiths': 7717, 'mileydimension': 7718, 'tamed': 7719, 'miley': 7720, 'apex': 7721, 'sunset99': 7722, 'criticizing': 7723, 'fxckhairin': 7724, 'sexylouis123': 7725, 'sexyasfuck': 7726, 'drgaysex': 7727, 'maxkonnorxxx': 7728, 'ensure': 7729, 'dominant': 7730, 'piercing': 7731, 'wal': 7732, 'dianelong22': 7733, 'thief': 7734, 'valor': 7735, 'charged': 7736, 'annaolympiaa': 7737, 'finstas': 7738, '200': 7739, 'finsta': 7740, 'service': 7741, 'evesluisa': 7742, 'howaboutno424': 7743, 'papadioum': 7744, 'peachsaliva': 7745, 'muster': 7746, 'sayori': 7747, 'avi': 7748, 'arianat06518296': 7749, 'bbmasachievement': 7750, 'hwolfauthor': 7751, 'nicholas': 7752, 'winton': 7753, '669': 7754, 'occupied': 7755, 'czechoslovakia': 7756, 'wwii': 7757, 'operatio': 7758, 'liu': 7759, 'qingge': 7760, 'wyominghippie1': 7761, 'antivaxxer': 7762, 'indies': 7763, 'vaccination': 7764, 'starkindxstries': 7765, 'sweatsxstew': 7766, 'finer': 7767, 'destination': 7768, 'bootcamp': 7769, 'workshop': 7770, 'liveatream': 7771, 'honzogonzo': 7772, 'chrisceg': 7773, 'thegunrun': 7774, 'panda': 7775, 'beenasty': 7776, 'notifying': 7777, 'ancestors': 7778, 'derronesho': 7779, 'soyeonschild': 7780, 'dissing': 7781, 'disses': 7782, 'slept': 7783, 'painful': 7784, 'vm2': 7785, 'bobbie': 7786, 'gz': 7787, 'smother': 7788, 'bobrae48': 7789, 'aprae': 7790, 'safety': 7791, 'piiidgeon': 7792, 'kpoppies': 7793, 'insult': 7794, 
'landlordsgas': 7795, 'drchaeed': 7796, 'label': 7797, 'englandcricket': 7798, 'skill': 7799, 'engvpak': 7800, 'spit': 7801, 'marbles': 7802, 'wipe': 7803, 'lrozen': 7804, 'counsel': 7805, 'kyle': 7806, 'meen': 7807, 'inno': 7808, 'twz': 7809, 'bogummy': 7810, 'idkbrosorry': 7811, 'egged': 7812, 'ellevargaz': 7813, 'ilsanb0i': 7814, 'kayajones': 7815, 'wice': 7816, 'couldnt': 7817, 'emrazz': 7818, 'narrow': 7819, 'lemming': 7820, 'celesia6': 7821, 'sholl': 7822, 'ssy': 7823, 'freak': 7824, 'min71747333': 7825, 'mathematics': 7826, 'chlobunnyy': 7827, 'cousin': 7828, 'leukemias': 7829, 'factsvixx': 7830, 'hakyeon': 7831, 'supervising': 7832, 'taekwoon': 7833, 'snack': 7834, 'purchases': 7835, 'soda': 7836, 'sn': 7837, 'obagnai': 7838, 'ironwidovv': 7839, 'nat': 7840, 'forgotten': 7841, 'janine': 7842, 'mentalhealthawareness': 7843, 'pig': 7844, 'subversive': 7845, 'masters': 7846, 'belong': 7847, 'richtoomey3': 7848, 'officialmckell': 7849, 'wood': 7850, 'requires': 7851, 'takesright': 7852, 'threw': 7853, 'balloon': 7854, 'grade': 7855, 'domestic': 7856, 'cowardly': 7857, 'maltese': 7858, 'plead': 7859, 'fifth': 7860, 'amendment': 7861, 'incriminate': 7862, 'taexiss': 7863, 'wambulogy': 7864, 'heidi': 7865, 'grody': 7866, 'tanyacompas': 7867, 'lgbtq': 7868, 'esy': 7869, 'collec': 7870, 'parallels': 7871, 'kymcgs': 7872, 'kneezurr': 7873, 'briphwilson': 7874, 'chrissynantz': 7875, 'rouse': 7876, 'theothermandela': 7877, 'kendrick': 7878, 'castillo': 7879, 'onepegmg': 7880, 'dethroned': 7881, 'pigletit': 7882, 'hulu': 7883, 'noticing': 7884, 'deej6x': 7885, 'iont': 7886, 'dressed': 7887, 'amandaasette': 7888, 'beauwatson18': 7889, 'afragmento': 7890, 'alannized': 7891, 'lonjas': 7892, 'reconocer': 7893, 'pendeja': 7894, 'powerscoach': 7895, 'goggans': 7896, '800m': 7897, '5a': 7898, '07': 7899, '39': 7900, 'misterrudeman': 7901, 'core': 7902, 'pathological': 7903, 'anoth': 7904, 'jordanuhl': 7905, 'shares': 7906, 'jamesmcc': 7907, 'therapymemesftw': 7908, 'damnit': 7909, 'staerrynight23': 7910, 'asthma': 7911, 'inhalator': 7912, 'stade': 7913, 'ahh': 7914, 'newark': 7915, 'gra': 7916, 'laurynsmith66': 7917, 'lauryn': 7918, 'knjight': 7919, 'braezellee': 7920, 'wha': 7921, 'txtinfos': 7922, 'tucker': 7923, 'carlson': 7924, '190501': 7925, '067': 7926, '72': 7927, 'clo': 7928, 'legohaechan': 7929, 'babyjunseoo': 7930, 'parkjunhee': 7931, 'taeminfvx': 7932, 'looolsss': 7933, 'shock': 7934, 'viscountbraith1': 7935, 'fyaz': 7936, 'greggsofficial': 7937, 'introduce': 7938, 'cabinets': 7939, 'stores': 7940, 'pasties': 7941, 'blowing': 7942, 'rose': 7943, 'mewoishkjdv': 7944, 'davidjharrisjr': 7945, 'can2geterdone': 7946, 'dawnf1re': 7947, 'awfully': 7948, 'follower': 7949, 'prefer': 7950, 'raff': 7951, 'euphoriagnes': 7952, 'googling': 7953, 'episiotomy': 7954, 'forceps': 7955, 'physiology': 7956, 'conservativma': 7957, 'rightful': 7958, 'heir': 7959, 'shamrockbibby': 7960, 'tmulah': 7961, 'hates': 7962, 'rippedguys': 7963, 'psvxxx': 7964, 'rippedmen': 7965, 'squeakgod': 7966, 'educational': 7967, 'simp1n': 7968, 'hows': 7969, 'gf': 7970, 'blendzz': 7971, 'tweakin': 7972, 'tcbrissette': 7973, 'refs': 7974, 'minaandmaya': 7975, 'coachella': 7976, 'barebakassassin': 7977, 'majored': 7978, 'itativcs': 7979, 'frustrated': 7980, 'clinging': 7981, 'apolitical': 7982, 'smacked': 7983, 'ja': 7984, 'katiepavlich': 7985, 'magically': 7986, 'thedarkknight07': 7987, 'xbxdvibesxx': 7988, 'suicidal': 7989, 'rockintrump': 7990, 'wardrobe': 7991, 'carries': 7992, 'thegreatkhalid': 7993, 'ndvrob': 7994, 'lindz46': 
7995, 'zatzi': 7996, 'drivel': 7997, 'rickkrauhl': 7998, 'justin': 7999, 'journals': 8000, 'buckle': 8001, 'baeby': 8002, 'ticketnetph': 8003, 'updates': 8004, 'baejinyoun': 8005, 'kateoneil75': 8006, 'tapat': 8007, 'sauce': 8008, 'proper': 8009, 'noises': 8010, 'movements': 8011, 'gravity': 8012, 'glob': 8013, 'bat': 8014, 'wwg1wga': 8015, 'stix': 8016, 'fixerguy2': 8017, 'kylekulinski': 8018, 'crazycdn123': 8019, '2m': 8020, 'jeffreyguterman': 8021, 'sniffy': 8022, 'economic': 8023, 'garbhum': 8024, 'actors': 8025, 'airpo': 8026, 'hrs': 8027, 'wreck': 8028, 'solenepersona': 8029, 'hdksjsjs': 8030, 'jh': 8031, 'sg': 8032, 'cypher': 8033, 'sayian': 8034, 'issues': 8035, 'armylo3e': 8036, 'obl': 8037, 'therickydavila': 8038, 'memeber': 8039, 'fourens': 8040, 'kokerepo': 8041, 'allegations': 8042, 'terminate': 8043, 'corru': 8044, 'rhiannamator': 8045, 'taker': 8046, 'freshness': 8047, 'expired': 8048, 'dailycaller': 8049, 'tlc': 8050, 'thc': 8051, 'hennessy': 8052, 'laylanicksss': 8053, 'feminist': 8054, 'sagarcasm': 8055, 'soty2': 8056, 'flores59': 8057, 'poll': 8058, 'forexover': 8059, 'exovotingsquad': 8060, 'yassss': 8061, 'sizt': 8062, 'mccauley713': 8063, 'pimp': 8064, 'snoop': 8065, 'lauren': 8066, 'lopezz': 8067, 'theonlycleoluna': 8068, 'thors': 8069, 'exceptionally': 8070, 'cbeebies': 8071, 'proms': 8072, 'itslilbaby': 8073, 'weirdmoviebros': 8074, '73': 8075, 'kimsrustywheels': 8076, 'figuratively': 8077, 'tourists': 8078, 'nurse': 8079, 'pinpointincorp': 8080, 'collaboration': 8081, 'boroughs': 8082, 'nhsba': 8083, 'shealth': 8084, 'engaging': 8085, 'peice': 8086, 'fucki': 8087, 'thelovebelow': 8088, 'drawling': 8089, 'depressiin': 8090, 'vondoviak': 8091, 'interest': 8092, 'prequels': 8093, 'opinions': 8094, 'mayor': 8095, 'sweets': 8096, 'roger': 8097, 'inez': 8098, 'nico': 8099, 'joonive': 8100, 'deflated': 8101, 'ynbc': 8102, 'jonjon': 8103, 'lou': 8104, 'luvbaek4': 8105, 'decrease': 8106, '14000': 8107, 'nose': 8108, 'catsexuall': 8109, 'shallah': 8110, 'panzaalt': 8111, 'correct': 8112, 'comments': 8113, 'imanmuejaza': 8114, 'kvngcharlie': 8115, 'meannnn': 8116, 'wouldnt': 8117, 'tgalessa': 8118, 'oregon': 8119, 'fatherrayo': 8120, 'forehead': 8121, 'taiwobusolami': 8122, 'prepared': 8123, 'governance': 8124, 'niger': 8125, 'chickpeajimin': 8126, 'recording': 8127, 'kingheiric': 8128, 'nighttek': 8129, 'stanning': 8130, 'recently': 8131, 'boston': 8132, 's18': 8133, 'aristafbabi': 8134, 'coworker': 8135, 'lasagna': 8136, 'interpret': 8137, 'chooses': 8138, 'fulfill': 8139, 'deepest': 8140, 'nohasalah': 8141, 'ba7bek': 8142, 'fash5': 8143, 'noha': 8144, 'knowww': 8145, 'gabrielitaa8': 8146, 'famous': 8147, 'kee': 8148, 'sloth': 8149, 'tien0': 8150, '3ohblack': 8151, 'vanillaxtrees': 8152, 'archive93': 8153, 'concise': 8154, 'lights': 8155, 'malign': 8156, 'darkness': 8157, 'raymella': 8158, 'xo': 8159, 'skincare': 8160, 'salodua': 8161, 'livecam': 8162, 'salo': 8163, 'dildo': 8164, 'lovense': 8165, 'pops': 8166, 'mdl': 8167, 'genocide': 8168, 'pd8557': 8169, 'pariah': 8170, 'removed': 8171, 'create': 8172, 'mime': 8173, 'appeared': 8174, 'original': 8175, 'theyoungestkeys': 8176, 'pok': 8177, 'characteristics': 8178, '98chamberlain': 8179, 'horses': 8180, 'whycondo': 8181, 'sleeper': 8182, 'mdotbrown': 8183, 'kevin': 8184, 'durant': 8185, 'woulda': 8186, 'coulda': 8187, 'kevon': 8188, 'jerebko': 8189, 'm888': 8190, 'choi': 8191, 'hba1c': 8192, 'levels': 8193, 'som': 8194, 'minscosmos': 8195, 'discover': 8196, 'marxiuminlow': 8197, 'misinterpreting': 8198, 'byyron': 8199, 
'nipsey': 8200, 'itsraimisyahmi': 8201, 'ouh': 8202, 'bahan': 8203, 'cs': 8204, 'geli': 8205, 'orang': 8206, 'lain': 8207, 'pun': 8208, 'ada': 8209, 'lawa': 8210, 'kau': 8211, 'thos': 8212, 'kimkai04148488': 8213, 'cisndkjdjaj': 8214, 'bitterrootguy': 8215, 'x98': 8216, 'debrun63': 8217, 'bearers': 8218, 'imperial': 8219, 'imaginifier': 8220, '3d': 8221, 'emperor': 8222, 'atop': 8223, 'pole': 8224, 'budgetapo': 8225, 'twitchsuppo': 8226, 'corp': 8227, 'gallivanscott': 8228, 'matthewstorey6': 8229, 'authoritarian': 8230, 'babyadjacent': 8231, 'billionaire': 8232, 'eunbjns': 8233, 'whos': 8234, 'eaten': 8235, 'nevadajack2': 8236, 'pleaee': 8237, 'brainwash': 8238, 'spittinchiclets': 8239, 'hook': 8240, 'authe': 8241, 'aarroonn': 8242, 'endured': 8243, 'vicious': 8244, 'opponents': 8245, 'reco': 8246, 'druck': 8247, 'nysohollywood': 8248, 'folded': 8249, 'thedollquay': 8250, 'chicagotribune': 8251, 'rumbunter': 8252, 'activated': 8253, 'perpetual': 8254, 'horseshoe': 8255, 'inning': 8256, 'pedophiles': 8257, 'map': 8258, 'nomap': 8259, 'orientation': 8260, 'pride': 8261, 'fla': 8262, 'flyaway': 8263, 'suspended': 8264, 'regularly': 8265, 'blustering': 8266, 'actua': 8267, 'onnat': 8268, 'theyarenotyou': 8269, 'wetmop3': 8270, 'mekaremadness': 8271, 'penguwingu': 8272, 'victoriatsao': 8273, 'ironwoodriver': 8274, 'fangirl': 8275, 'uranus': 8276, 'jguido14': 8277, 'saviour': 8278, 'ketengahketepi': 8279, 'bevchester': 8280, 'brooklyn': 8281, 'soft': 8282, 'khaleesi': 8283, 'harlem': 8284, 'midwifery': 8285, 'lecturers': 8286, 'jcu': 8287, 'bachelor': 8288, 'degre': 8289, 'mommyinadaze': 8290, '39th': 8291, 'boneless': 8292, 'spareribs': 8293, 'shrimp': 8294, 'smileeee': 8295, 'offenders': 8296, 'likelihood': 8297, 'vulnerable': 8298, 'broadcast': 8299, '500': 8300, 'fjdanieletto': 8301, 'arnoldspo': 8302, 'secure': 8303, 'headache': 8304, 'offlinemalek': 8305, 's1': 8306, 'theoa': 8307, 'exceptional': 8308, 's2': 8309, 'thx': 8310, 'britmarling': 8311, 'et': 8312, 'mikrotear': 8313, 'bh': 8314, 'debuted': 8315, 'seanbaggett81': 8316, 'masturbation': 8317, 'relief': 8318, 'calm': 8319, 'mahoupriest': 8320, 'iambumblebee': 8321, 'snerpent': 8322, 'ingenuity': 8323, 'ericholder': 8324, 'weapons': 8325, 'chapo': 8326, 'jhsmlcdrop': 8327, 'western': 8328, 'marke': 8329, 'alysabethspeice': 8330, 'lyctomas': 8331, 'aaron': 8332, 'population': 8333, 'mista71': 8334, 'xguccihes': 8335, 'mesmerized': 8336, 'fedoras': 8337, 'logan': 8338, 'terrazas': 8339, 'sensitive': 8340, 'offends': 8341, 'abbixstannard': 8342, 'prescriptions': 8343, 'mrstobon': 8344, 'michaelbd': 8345, 'shutdown': 8346, 'starchitects': 8347, 'navinbrah': 8348, 'interned': 8349, 'vanessa': 8350, 'rocked': 8351, 'edge': 8352, 'infield': 8353, 'outfield': 8354, 'lizcamp': 8355, 'nel': 8356, 'lob': 8357, 'oce': 8358, 'espo': 8359, 'stucam7771': 8360, 'mitchell': 8361, '1969': 8362, 'conspiracy': 8363, 'fazesway23': 8364, 'lilbabyfae': 8365, 'boners': 8366, 'chrissyteigen': 8367, 'sympathy': 8368, 'hawaii': 8369, 'blitheri': 8370, 'dasharez0ne': 8371, 'admin': 8372, 'dirkschwenk': 8373, 'butina': 8374, 'kinglrg': 8375, 'meme': 8376, 'lemme': 8377, 'lustlavishly': 8378, 'momm': 8379, 'phat': 8380, 'fitting': 8381, 'lolla': 8382, 'junitoboy1': 8383, 'embodying': 8384, 'oshun': 8385, 'goddess': 8386, 'river': 8387, 'iamcardib': 8388, 'saffiya': 8389, 'khan1': 8390, 'lucyfrown': 8391, 'radicaliser': 8392, 'robinson': 8393, 'safeagain1': 8394, 'documented': 8395, 'began': 8396, '140': 8397, 'contacts': 8398, 'oneandonly': 8399, 'izonethings': 
8400, 'yujin': 8401, 'brittanyterry': 8402, 'dets': 8403, 'momagic': 8404, 'andrizzzydrea': 8405, 'overcome': 8406, 'challenges': 8407, 'faced': 8408, 'crazyantguy1970': 8409, 'gals': 8410, 'amazingly': 8411, 'shanis': 8412, 'hovitaaa': 8413, 'aahhhhhh': 8414, 'mustdiemusic': 8415, 'doubles': 8416, 'skinnyhyun': 8417, 'intricate': 8418, 'press': 8419, 'chucktodd': 8420, 'hick': 8421, 'seaglasssiren': 8422, 'pavy': 8423, 'pokedex': 8424, 'ridemyagustd': 8425, 'subspace': 8426, 'dom': 8427, 'hunger': 8428, 'allah': 8429, 'yasa': 8430, 'dace': 8431, 'ameen': 8432, 'malam': 8433, 'leavenwo': 8434, 'ks': 8435, 'clerical': 8436, 'julesno': 8437, 'identify': 8438, 'electronically': 8439, 'bl': 8440, 'chuuzus': 8441, 'rocky': 8442, 'arjmxrell': 8443, 'tsukl666': 8444, 'fazekay': 8445, 'typically': 8446, 'cod4': 8447, 'wars': 8448, 'surround': 8449, 'limited': 8450, 'mainemendoza': 8451, 'beep': 8452, 'aling': 8453, 'puring': 8454, 'convention': 8455, 'purc': 8456, 'rgpoulussen': 8457, 'otd': 8458, '1941': 8459, 'rudolf': 8460, 'hess': 8461, 'negotiate': 8462, 'sublime': 8463, 'navigation': 8464, '142': 8465, '626': 8466, '84': 8467, 'audio': 8468, 'imtrulyangel': 8469, 'pattonoswalt': 8470, 'blurpppppppppp': 8471, 'aiming': 8472, 'forbes': 8473, 'disowned': 8474, 'dictate': 8475, 'aavasquez': 8476, 'sweetest': 8477, 'sensational': 8478, 'wingsofkings1': 8479, 'material': 8480, 'xpixeliex': 8481, 'gang': 8482, 'edgarrawdon': 8483, 'wb': 8484, 'baskerville': 8485, 'disse': 8486, 'ation': 8487, 'victor': 8488, 'clercq': 8489, 'odatsg': 8490, 'yessssssgigi': 8491, 'callher': 8492, 'lola': 8493, 'woesenpai69': 8494, 'dinodinosdino': 8495, 'joshuaplayspkmn': 8496, 'bother': 8497, 'shiny': 8498, 'pokemon': 8499, 'rewards': 8500, 'loudre': 8501, 'nowplaying': 8502, 'morrell': 8503, 'paulmorrell': 8504, 'mbn': 8505, 'threesome': 8506, 'rgrich15': 8507, 'bluejays': 8508, 'furocious': 8509, 'knights': 8510, 'shutout': 8511, 'keoni': 8512, 'ipolani': 8513, 'thepride200018': 8514, 'bash': 8515, 'leaks': 8516, 'disrespect': 8517, 'rehab': 8518, 'indecisechaos': 8519, 'hmmmm': 8520, 'sparkling': 8521, 'modelling': 8522, 'shayzee10': 8523, 'europa': 8524, 'msblairewhite': 8525, 'glamlifeguru': 8526, 'truths': 8527, 'privately': 8528, 'macdaddysammy': 8529, 'bniceloco': 8530, 'crazynomura': 8531, 'microwaves': 8532, 'hopepeacelove': 8533, 'isteyzian': 8534, 'lauv': 8535, 'hits': 8536, 'approached': 8537, 'mansion': 8538, 'mrarsenictm': 8539, 'zil': 8540, 'mesut': 8541, '2021': 8542, 'gossip': 8543, 'zzariiinaa': 8544, 'keepmesecret333': 8545, 'theknickerfairy': 8546, 'shattering': 8547, 'wel': 8548, 'tamannainsan15': 8549, 'hqpornhq': 8550, 'negotiations': 8551, 'lunastar': 8552, 'bigtitsatwork': 8553, 'brazzers': 8554, 'highpockets84': 8555, 'luldre': 8556, 'rodcraiga': 8557, 'postcard': 8558, 'twitra': 8559, 'exhibit': 8560, 'edinburgh': 8561, 'gs': 8562, 'kaycesmith': 8563, 'tylerstroud2': 8564, 'feminists': 8565, 'darickr': 8566, 'modeled': 8567, 'modeling': 8568, 'soulljahhh': 8569, 'passpo': 8570, 'europe': 8571, 'drkindeya': 8572, 'kindu': 8573, 'thenoelmiller': 8574, 'douche': 8575, 'highschool': 8576, 'christahjay': 8577, 'itll': 8578, 'andrewge': 8579, 'ler': 8580, 'juicegawdupnext': 8581, 'russ': 8582, 'reidgirl5': 8583, 'jensen': 8584, 'ackles': 8585, 'lopsided': 8586, 'blown': 8587, 'masterdevwi': 8588, 'delivering': 8589, 'threddy': 8590, 'mailboxes': 8591, 'diaanavarg': 8592, 'banksonabbey': 8593, 'messages': 8594, 'fron': 8595, 'thotgenic': 8596, 'nonexistent': 8597, 'mihlalii': 8598, 'jinjoonies': 
8599, 'booking': 8600, 'floorpunched': 8601, 'twee': 8602, 'stupidity': 8603, 'alexis': 8604, 'hallmark': 8605, 'cinder5byefive': 8606, 'kvnghdz': 8607, 'augur': 8608, 'bbcquestiontime': 8609, 'minded': 8610, 'remoaners': 8611, 'insufferable': 8612, 'twats': 8613, 'amorousangg': 8614, 'beanies': 8615, 't6thestars': 8616, 'bin': 8617, 'jinjin': 8618, 'moviescontext': 8619, 'squad': 8620, 'mcdonald': 8621, 'mac': 8622, 'chaelicendgame': 8623, 'princemarcus': 8624, 'fulla': 8625, 'recover': 8626, 'misses': 8627, 'contag': 8628, 'jaybilas': 8629, 'lucked': 8630, 'liverpoolleft': 8631, 'coppers': 8632, 'solidarity': 8633, 'bootle': 8634, 'nanimill21': 8635, 'lovey': 8636, 'dovey': 8637, 'imthereasonwhy': 8638, 'evesjoyce': 8639, 'cotton': 8640, 'randpaul': 8641, 'gxddessdeath': 8642, 'accepting': 8643, 'affection': 8644, 'smitter11': 8645, 'teamcornyn': 8646, 'conning': 8647, 'pornstar': 8648, 'rawdogging': 8649, 'kianaalol': 8650, 'simonjwooduk': 8651, 'bereaved': 8652, 'londonmarathon': 8653, 'charity': 8654, 'griefencounter': 8655, 'emeraldxmoon': 8656, 'voters': 8657, 'prolly': 8658, 'grades': 8659, 'judiirene': 8660, 'starl1ght7i5': 8661, 'jacquieswears': 8662, 'misunderstood': 8663, 'referencing': 8664, 'ninapineda7': 8665, 'billritter7': 8666, 'nina': 8667, 'fugazy': 8668, 'lamp': 8669, 'tlodadon7': 8670, 'convos': 8671, 'sojourner': 8672, 'miqdaad': 8673, 'wildpaws': 8674, 'govtwine': 8675, 'violatin': 8676, 'resignin': 8677, 'practicin': 8678, 'catholic': 8679, 'multiple': 8680, 'marriages': 8681, 'sinnin': 8682, 'nabuhay': 8683, 'ulit': 8684, 'ang': 8685, 'faizalmalasahi': 8686, 'choosing': 8687, 'tataschimmy': 8688, 'presenter': 8689, 'niece': 8690, 'punk': 8691, 'eptic': 8692, 'agnes': 8693, 'ton': 8694, 'taetards': 8695, 'para': 8696, 'hindi': 8697, 'kayo': 8698, 'pvvult': 8699, 'maxine': 8700, 'vocabulary': 8701, 'mcspocky': 8702, 'kjblisss': 8703, 'bob': 8704, 'sierras': 8705, 'odenkirk': 8706, 'jeez': 8707, 'horizontal': 8708, 'barney': 8709, 'tail': 8710, 'captainkalvis': 8711, 'pulse': 8712, 'clarky': 8713, 'pulsemynx': 8714, 'fazeclan': 8715, 'testyment': 8716, 'wholesome': 8717, 'bigbosstunna': 8718, 'heydonworks': 8719, 'shanesheehy': 8720, 'baboom': 8721, 'hotel': 8722, 'mamataofficial': 8723, 'petyr': 8724, 'protagonist': 8725, 'deader': 8726, 'hopes': 8727, 'sexist': 8728, 'sore': 8729, 'uro182': 8730, 'hackadayio': 8731, 'absolutebritney': 8732, 'blumenthal': 8733, 'blooming': 8734, 'unreleased': 8735, 'emote': 8736, 'kathea': 8737, 'pokemonchal': 8738, 'blackroseangel6': 8739, 'jubileeblais': 8740, 'adrive': 8741, 'tk': 8742, 'shush': 8743, 'phattilabelle': 8744, 'salary': 8745, 'vontesrealm': 8746, 'phoebe': 8747, 'zsuns': 8748, 'notjm': 8749, 'cub98': 8750, 'oye': 8751, 'suns': 8752, 'edited': 8753, 'chimineer': 8754, 'mv': 8755, 'shootings': 8756, 'youth': 8757, 'znik91': 8758, 'sekshiiie': 8759, 'fierygreenflame': 8760, 'kanye': 8761, 'daysdreamln': 8762, 'screenshots': 8763, 'photoshopped': 8764, 'fatheradz': 8765, 'explicitly': 8766, 'initiate': 8767, 'smokin': 8768, 'winginit': 8769, 'rows': 8770, 'perv': 8771, 'streetrenegade': 8772, 'wittlenessa': 8773, 'disturb': 8774, 'finall': 8775, 'chipotletweets': 8776, 'burritos': 8777, 'yal': 8778, 'nadximra': 8779, 'flickrstatus': 8780, 'clarify': 8781, 'magnificent': 8782, 'taekooknsfw1': 8783, 'shoved': 8784, 'deepthroat': 8785, 'btsnsfw': 8786, 'nsfwbts': 8787, 'rgoodlaw': 8788, 'censure': 8789, 'gene': 8790, 'september': 8791, 'november': 8792, 'capturing': 8793, 'giant': 8794, 'sepncer': 8795, 'default': 8796, 
'stonyshome': 8797, 'raisin': 8798, 'fought': 8799, 'stevetony': 8800, 'joannes': 8801, 'feargal': 8802, 'sharkey': 8803, 'secrets': 8804, 'southern': 8805, 'flooded': 8806, 'rhay1991': 8807, 'untouchable': 8808, 'adapt': 8809, 'beellleeeeeee': 8810, 'essay': 8811, 'kadinoooo': 8812, 'mindful': 8813, 'amy': 8814, 'siskind': 8815, 'outpouring': 8816, 'misogyny': 8817, 'adamcbest': 8818, 'vkook': 8819, 'jealous': 8820, 'darrell': 8821, 'lota': 8822, 'probability': 8823, 'crumbs': 8824, 'motherhood': 8825, 'listened': 8826, 'senorprepotente': 8827, 'cuffs': 8828, 'maven': 8829, 'mistress': 8830, 'igd': 8831, 'riley': 8832, 'grisar': 8833, 'las': 8834, 'aegis': 8835, 'kdramaworlld': 8836, 'enlist': 8837, 'goodbye': 8838, 'iammatthewa2': 8839, 'bond': 8840, 'wack': 8841, 'icedoutomnitrix': 8842, 'pouya': 8843, 'chrisbanx': 8844, 'shbups': 8845, 'cutie': 8846, 'unchangin': 8847, 'greta': 8848, 'phoenixbmeadows': 8849, 'ditto': 8850, 'brfootball': 8851, 'wildest': 8852, 'duolingo': 8853, 'shootmyselfinmyfoot': 8854, 'athletes': 8855, 'millions': 8856, 'gear': 8857, 'concessions': 8858, 'noone': 8859, 'buys': 8860, 'leeminhoarchive': 8861, 'mischief': 8862, '190429': 8863, 'monmouths': 8864, 'zack': 8865, 'humanitarian': 8866, 'monarchy': 8867, 'paulcherry69': 8868, 'invented': 8869, 'sequel': 8870, 'alm': 8871, 'colorless': 8872, 'bow': 8873, 'intsys': 8874, 'cosmosdior': 8875, 'goverment': 8876, 'treats': 8877, 'nra': 8878, 'levis': 8879, 'habits': 8880, 'pov': 8881, 'basednas': 8882, 'awww': 8883, 'gtf': 8884, 'ashy': 8885, 'swim': 8886, 'ashleycaban': 8887, 'kellimorrisonxo': 8888, 'flawedradiance': 8889, 'smooch': 8890, 'kitty': 8891, 'ears': 8892, 'tressabeu': 8893, 'btsgiobal': 8894, 'btson': 8895, 'eoinhiggins': 8896, 'thebradfordfile': 8897, 'nkosi': 8898, 'milton': 8899, '8pm': 8900, 'stands': 8901, 'thickheaded': 8902, 'anythin': 8903, 'um': 8904, 'ginashkeda': 8905, 'catvalente': 8906, 'comprehensive': 8907, 'subsidized': 8908, 'adoption': 8909, 'refor': 8910, 'onlykimbora': 8911, 'vlogs': 8912, 'gahyeon': 8913, 'jiu': 8914, 'yoohyeon': 8915, 'siyeon': 8916, 'aremond2': 8917, 'weathered': 8918, 'frail': 8919, 'ye': 8920, 'sunchimm': 8921, 'myfavstrash': 8922, 'sweat': 8923, 'jenniferwwo': 8924, 'mcgahn': 8925, 'obstruct': 8926, 'masterjunkyu': 8927, 'removing': 8928, 'bokeh': 8929, 'yedam': 8930, 'jyeristia': 8931, 'baru': 8932, 'baca': 8933, 'separuh': 8934, 'luckily': 8935, 'je': 8936, 'lah': 8937, 'malukan': 8938, 'isaac': 8939, 'wright': 8940, 'justbreathe': 8941, 'oo4l': 8942, 'starrin': 8943, 'timescanner': 8944, 'sufficiently': 8945, 'resou': 8946, 'parkjlmin': 8947, 'armylegends': 8948, 'rmpics': 8949, 'ki': 8950, 'ng': 8951, 'fb': 8952, 'woahitsjojo': 8953, 'rhcp': 8954, 'amlwael98': 8955, 'tgrande7': 8956, 'enemy': 8957, 'ahoycarolina': 8958, 'imprison': 8959, 'maternity': 8960, 'peteforamerica': 8961, 'livepeteordie': 8962, 'rimac': 8963, 'approaching': 8964, 'peka': 8965, 'melbourne': 8966, 'ksjdimple': 8967, 'fist': 8968, 'wellbeing': 8969, 'boards': 8970, 'on23': 8971, 'mog7546': 8972, 'intimidation': 8973, 'donjr': 8974, 'intend': 8975, 'defying': 8976, 'subpoena': 8977, 'heeeeeeell': 8978, 'taahir98': 8979, 'audi': 8980, 'minhijoearmy': 8981, 'wallpaper': 8982, 'faith': 8983, 'rabbi': 8984, 'yisroel': 8985, 'goldstein': 8986, 'sinsationall': 8987, 'nutted': 8988, 'ahemdanvers': 8989, 'peps': 8990, 'yimikaowoaje': 8991, 'aang': 8992, 'deflect': 8993, 'rasengans': 8994, 'rasenshurikens': 8995, 'attac': 8996, 'gisfiyumi91': 8997, 'amosc': 8998, 'hornydm': 8999, 'dmme': 
9000, 'snapme': 9001, 'bigdick': 9002, 'bigcock': 9003, 'iamforgelance': 9004, 'onebigloveforangela': 9005, 'itunes': 9006, 'sighed': 9007, 'xuxisglow': 9008, 'nedarbnagrom': 9009, 'specified': 9010, 'craft': 9011, 'cuzzz': 9012, 'awayyyy': 9013, 'bffjeongguk': 9014, 'stating': 9015, 'scrubs': 9016, 'doingright1': 9017, 'falsely': 9018, 'sonic': 9019, 'parkerstaars': 9020, 'rdj': 9021, 'oscar': 9022, 'senseiamal': 9023, 'assault': 9024, 'roof': 9025, 'sharing': 9026, 'intimate': 9027, 'madness': 9028, 'andyostroy': 9029, 'inappropriate': 9030, 'zephanijong': 9031, 'cinematic': 9032, 'kellysherlock1': 9033, 'sljbikelover': 9034, '46': 9035, 'clout': 9036, 'syamilaaa': 9037, 'january': 9038, 'honmugwagwat': 9039, 'varakashi': 9040, 'mdc': 9041, 'electorate': 9042, 'july': 9043, 'wher': 9044, 'receive': 9045, 'kayleeburris': 9046, 'fancy': 9047, 'temi': 9048, 'starksrhodes': 9049, 'russo': 9050, 'jimmy': 9051, 'woo': 9052, 'deadmii': 9053, 'jontron': 9054, 'apologized': 9055, 'sabribsa': 9056, 'eijirou': 9057, 'kirishima': 9058, 'fin': 9059, 'retrokims': 9060, 'grabbed': 9061, 'manipulator': 9062, 'bradsherman': 9063, 'mkerbi': 9064, 'inse': 9065, 'selena': 9066, 'gomez': 9067, 'sheckwes': 9068, 'yesssir': 9069, 'owwwwe': 9070, 'scouts': 9071, 'heywood98': 9072, 'sidewalk': 9073, 'microphone': 9074, 'speaker': 9075, 'aduitprobs': 9076, 'reese': 9077, 'traciemac': 9078, 'bmore': 9079, 'sequels': 9080, 'julian': 9081, 'butler': 9082, 'hughrbennett': 9083, 'semitism': 9084, 'racism': 9085, 'kiddomarv': 9086, 'vincentdesmond': 9087, 'gama': 9088, 'noonan': 9089, 'discusses': 9090, 'void': 9091, 'lamp19': 9092, 'therealshamiam': 9093, 'springer': 9094, 'droggon': 9095, 'gehrig38': 9096, 'aodespair': 9097, 'alyssaschoener': 9098, 'boob': 9099, 'brucevh': 9100, 'eli': 9101, 'lunaa': 9102, 'goodwood': 9103, 'surrogacy': 9104, 'andrewdhaner': 9105, 'fredthegeminiii': 9106, 'fourevathuggin': 9107, 'wijnaldum': 9108, 'sydneycrespo': 9109, 'feu': 9110, 'araneta': 9111, 'wearher': 9112, 'yara': 9113, 'streak': 9114, 'roulette': 9115, 'load': 9116, 'lik': 9117, 'hanging': 9118, 'justinmcelroy': 9119, 'snsachinnandu': 9120, 'rj': 9121, 'gj': 9122, 'mh': 9123, 'ncp': 9124, 'goa': 9125, 'sardesairajdeep': 9126, 'channels': 9127, 'kedarnath': 9128, 'yatra': 9129, 'varanasi': 9130, 'ezmayt': 9131, 'craignolan928': 9132, 'bowenchris': 9133, '36': 9134, 'qwe': 9135, 'ee': 9136, '12hr': 9137, 'tee': 9138, 'modifiedvikas': 9139, 'custody': 9140, 'photoshop': 9141, 'woma': 9142, 'didyouknow': 9143, 'futureofwork': 9144, 'fawfulfan': 9145, 'blacks': 9146, 'spokeswoman': 9147, 'adeleoutdid': 9148, 'adele': 9149, 'stephlitty': 9150, 'appreciating': 9151, 'stephone': 9152, 'gilmore': 9153, 'clueless': 9154, '2gs': 9155, 'offensive': 9156, 'boysofbelami': 9157, 'belami': 9158, 'cocksucking': 9159, 'externalization': 9160, 'laundry': 9161, 'allstarjr2724': 9162, 'celestielleva': 9163, 'alittlemid': 9164, 'upsetting': 9165, 'enzodzns': 9166, 'cooph12': 9167, 'johndzns': 9168, 'firefighters': 9169, 'dues': 9170, 'sucking': 9171, 'union': 9172, 'jamsjamsjim': 9173, 'mtracey': 9174, 'passingly': 9175, 'familiar': 9176, 'bombshell': 9177, 'cycles': 9178, 'kn': 9179, 'semester': 9180, 'mnstrkaty': 9181, 'hiii': 9182, 'katy': 9183, 'lov': 9184, 'cityboyxo': 9185, 'tfw': 9186, 'mlb': 9187, 'katana': 9188, 'grid': 9189, 'sux': 9190, 'elseanomac': 9191, 'targarcyn': 9192, 'hints': 9193, 'norangalal': 9194, 'extended': 9195, 'dir': 9196, 'kyc': 9197, '30th': 9198, 'lenasxo': 9199, 'gallery': 9200, 'peacebeyondme': 9201, 
'foreveryoung': 9202, 'thobbyvincent': 9203, 'stonebwoyb': 9204, 'nextbaseus': 9205, 'shandenf': 9206, 'franchise': 9207, 'jksingula': 9208, 'getzlerchem': 9209, 'polymers': 9210, 'barneygrubbs': 9211, 'csr': 9212, 'plastics': 9213, 'ti': 9214, 'koosnamgi': 9215, 'pt2': 9216, 'performances': 9217, 'namgi': 9218, 'spring': 9219, 'commencement': 9220, 'recap': 9221, 'nornadzrah': 9222, 'flm22': 9223, 'takeaway': 9224, 'commmittee': 9225, 'acuity': 9226, 'realrinaldi91': 9227, 'kingsdalefc': 9228, 'apprentice': 9229, 'whilst': 9230, 'diseree': 9231, 'mannn': 9232, '904skrilla': 9233, 'det': 9234, 'pinnyloketch': 9235, 'jakedenver6': 9236, 'jackbradylyons': 9237, 'replacement': 9238, 'eriksen': 9239, 'winger': 9240, 'alli': 9241, 'decent': 9242, 'cdm': 9243, 'singles': 9244, 'rustydawg19': 9245, 'sandersaurus': 9246, 'afl': 9247, 'brai': 9248, 'oiivrstark': 9249, 'joelosteen': 9250, 'amaz': 9251, 'belmting': 9252, 'journey': 9253, 'sybrennaa': 9254, 'gupton68': 9255, 'pied': 9256, 'piper': 9257, 'flute': 9258, 'maple': 9259, 'syrup': 9260, 'rats': 9261, 'infamousminded': 9262, 'handler': 9263, 'lo': 9264, 'succeeding': 9265, 'ironsheik': 9266, 'suffers': 9267, 'meltdown': 9268, 'leverage': 9269, 'headking': 9270, 'legends': 9271, 'senatemajldr': 9272, 'majority': 9273, 'int': 9274, 'juanpman23': 9275, 'hmu': 9276, 'mashwaniazhar': 9277, 'sadqa': 9278, 'jariya': 9279, 'disappointed': 9280, 'intolerant': 9281, 'astounding': 9282, 'ridejiwon': 9283, 'jiyong': 9284, 'shazdgttv': 9285, 'keyboard': 9286, 'xbuttercupally': 9287, 'ameliamelody': 9288, 'brownandbella': 9289, 'depravity': 9290, 'prevent': 9291, 'chainbody': 9292, 'lately': 9293, 'stasiaxo': 9294, 'trifling': 9295, 'minds1': 9296, 'temples': 9297, 'complicated': 9298, 'philosophy': 9299, 'employee': 9300, 'rizzzy': 9301, 'javeigh': 9302, 'relatable': 9303, 'majornelson': 9304, 'xbox': 9305, 'enhanced': 9306, 'sekiro': 9307, 'shadows': 9308, '44': 9309, '59': 9310, 'generichoe': 9311, 'elmattico': 9312, 'wheelchair': 9313, 'liciaa': 9314, 'renee': 9315, 'justcallmeally': 9316, 'nothin': 9317, 'ruins': 9318, 'hittin': 9319, 'incorrectkchk': 9320, 'ochako': 9321, 'insulted': 9322, 'amen': 9323, 'adulterer': 9324, 'christians': 9325, 'families': 9326, 'sofiasilvestrii': 9327, 'protected': 9328, 'sadityyyb': 9329, 'vacations': 9330, 'surprises': 9331, 'wordmercenary': 9332, 'khou': 9333, 'whooping': 9334, 'gravestone': 9335, 'lexkeyk': 9336, 'lulyasx': 9337, 'sade': 9338, 'agoon': 9339, 'dat': 9340, 'fictional': 9341, 'wormhole': 9342, 'junkiebjones': 9343, 'deserving': 9344, 'gcfshobi': 9345, 'gloomy': 9346, 'arrived': 9347, 'baela': 9348, 'trick': 9349, 'spoiled': 9350, 'badbjafia': 9351, 'roasting': 9352, 'lclutterbuckx': 9353, 'draining': 9354, 'mile': 9355, 'johnnyh1857': 9356, 'bleached': 9357, 'blonde': 9358, 'bomber': 9359, 'specially': 9360, 'mrsip78': 9361, 'ughtea': 9362, 'bassinggal': 9363, 'babysit': 9364, 'puppy': 9365, 'jonasbrothers': 9366, 'happinessbegins': 9367, 'eternalsunseoks': 9368, 'zantestrays': 9369, 'starving': 9370, 'massimo': 9371, 'gentle': 9372, 'volunteers': 9373, 'lad': 9374, 'thatsmisslisa2u': 9375, 'crawling': 9376, 'charliewall': 9377, 'bpdbanter': 9378, 'te': 9379, 'kendralust': 9380, 'wcw': 9381, 'beaut': 9382, 'lovekendra': 9383, 'lustarmy': 9384, 'pods': 9385, 'buddies': 9386, 'hef4caerphilly': 9387, 'nationcymru': 9388, 'hef': 9389, 'nextehbsjsk': 9390, 'mrpops': 9391, 'iv': 9392, 'aww': 9393, 'dya': 9394, 'realrussiacollusion': 9395, 'ukraine': 9396, 'probes': 9397, 'breathed': 9398, 'nichols': 9399, 
'targaryen': 9400, 'breaker': 9401, 'chains': 9402, 'watcher': 9403, 'jaeparkhk': 9404, 'version': 9405, 'whuts': 9406, 'jae': 9407, 'collymore': 9408, 'dreaming': 9409, 'bailona2019': 9410, 'babyyyhoneyy': 9411, 'ghostinlottie': 9412, 'clarissaleeann': 9413, 'dirkjordan2': 9414, 'marycheneve': 9415, 'betsygervasi': 9416, 'deadbunnyfrank': 9417, 'hopefully': 9418, 'sire': 9419, 'liljosh': 9420, 'loaned': 9421, 'thre': 9422, 'demolition': 9423, 'babie': 9424, 'growned': 9425, 'blackandbrave45': 9426, 'kimberlyannwi12': 9427, 'typicalaussie30': 9428, 'phychoenigma95': 9429, 'realizing': 9430, 'slew': 9431, 'childhood': 9432, 'a1': 9433, 'shxt': 9434, 'package': 9435, 'prayingmedic': 9436, 'furiously': 9437, 'discredit': 9438, 'smear': 9439, 'krishgm': 9440, 'williamson': 9441, 'wrecked': 9442, 'sacked': 9443, 'breach': 9444, 'conf': 9445, 'stewlicker': 9446, 'eer': 9447, 'leots13': 9448, 'evolved': 9449, 'ken19512': 9450, 'illegally': 9451, 'financial': 9452, 'sup': 9453, 'jul': 9454, 'sybrina': 9455, 'fulton': 9456, 'trayvon': 9457, 'lmfksjznwhz': 9458, 'mgm': 9459, 'willfully': 9460, 'moronic': 9461, 'spideymemoir': 9462, 'superpowers': 9463, 'gerry': 9464, 'conway': 9465, 'saviuk': 9466, 'keith': 9467, 'snitchpuff': 9468, 'oliviafics': 9469, 'bigmoneyyk': 9470, 'threee': 9471, 'brazy': 9472, 'brendon': 9473, 'bbcbreakfast': 9474, 'bbcnews': 9475, 'upper': 9476, 'awkwardness': 9477, 'theremin': 9478, 'mychemsolobot': 9479, 'intentions': 9480, 'bearing': 9481, 'seussterhoff': 9482, 'llebrun': 9483, 'seuss': 9484, 'prolife': 9485, 'survivethetrap': 9486, 'cover': 9487, 'studying': 9488, 'nasa': 9489, 'closet': 9490, 'chock': 9491, 'anissa': 9492, 'yannanrsc': 9493, 'reverse': 9494, 'uno': 9495, 'favs': 9496, 'claiming': 9497, 'libe': 9498, 'arian': 9499, 'grotes': 9500, 'mcnaughtona': 9501, 'kfor': 9502, 'cab': 9503, 'drivers': 9504, 'bb': 9505, 'jaime': 9506, 'emerging': 9507, 'rubble': 9508, 'da7e': 9509, 'lowering': 9510, 'hurandmera': 9511, 'boopy': 9512, 'forward': 9513, 'tracynarboneta': 9514, 'eno': 9515, 'jdm': 9516, 'serving': 9517, 'barackobama': 9518, 'privilege': 9519, 'share': 9520, 'ordi': 9521, 'tututullip': 9522, 'indieray02': 9523, 'redbird2fly': 9524, 'kakar': 9525, 'harsha': 9526, 'christiano': 9527, 'lopez': 9528, 'meothmans': 9529, 'maymunatu': 9530, 'umargee00': 9531, 'yusufabdallahh': 9532, 'dama': 9533, 'boil': 9534, 'thelastrm': 9535, 'heterosexual': 9536, 'interacting': 9537, 'lean': 9538, 'cuisine': 9539, 'sierra': 9540, '93000bjiaer': 9541, 'survived': 9542, 'resisted': 9543, 'roversbob': 9544, 'writer': 9545, 'senpai': 9546, 'bonuses': 9547, 'sameerahmdd': 9548, 'marr': 9549, 'cuhh': 9550, 'atuknature': 9551, 'nangoi': 9552, 'utara': 9553, 'budak': 9554, 'hingusan': 9555, 'babi': 9556, 'yang': 9557, 'beranak': 9558, 'seokjinbit': 9559, 'fucjfgshd': 9560, 'ensembles': 9561, 'fudnwjdbdi': 9562, 'ezlikeurgurl': 9563, 'michelleeesays': 9564, 'disregarding': 9565, 'getausright': 9566, 'preetismenon': 9567, 'filthy': 9568, 'misogynist': 9569, 'bjp4india': 9570, 'sordid': 9571, 'fak': 9572, 'wataugaboosters': 9573, 'nchsaa': 9574, 'whs': 9575, 'wataugawsoccer': 9576, 'belie': 9577, 'ambeer': 9578, 'mandigmaii1': 9579, 'trotters': 9580, 'resting': 9581, 'volunteering': 9582, 'aralovebts90': 9583, 'wowww': 9584, 'speechless': 9585, 'yg': 9586, 'maksum': 9587, 'bjaya': 9588, 'burukkan': 9589, 'nama': 9590, 'malaysia': 9591, 'agama': 9592, 'sided': 9593, 'eversinceaiz': 9594, 'parkermccollum': 9595, 'bobs': 9596, 'cowtown': 9597, 'gums': 9598, 'scraped': 9599, 'shifts': 
9600, 'chile': 9601, 'esper': 9602, 'isabella': 9603, 'cultures': 9604, 'technology': 9605, 'jrockfrmnbc': 9606, 'neva': 9607, 'bendstudio': 9608, 'triumph': 9609, 'chiinedu': 9610, 'thecheekyginger': 9611, 'hvmcca': 9612, 'ney': 9613, 'ps2': 9614, 'rain': 9615, 'clouds': 9616, 'ajisafemayowa': 9617, 'noisemakers': 9618, 'gatordave': 9619, 'sec': 9620, 'hindsight': 9621, 'commercial': 9622, 'hammered': 9623, 'aleghostls': 9624, 'bitil': 9625, 'bionicle': 9626, 'atrocious': 9627, 'doieiuvr': 9628, 'storyyyyyyy': 9629, 'aaaaaaaaaaa': 9630, 'mrchrisnewton': 9631, 'bluffing': 9632, 'teresa': 9633, 'monique2': 9634, 'wipes': 9635, 'colouryclouds': 9636, 'oneahgasegot7': 9637, 'aghase': 9638, 'security': 9639, 'odd': 9640, 'deny': 9641, 'cookie': 9642, 'intruder': 9643, 'nairobi': 9644, 'yh': 9645, 'dxygx': 9646, 'visualstravel': 9647, 'clearer': 9648, 'iamhea': 9649, 'iess': 9650, 'heavydusk': 9651, 'pose': 9652, 'sends': 9653, 'stance': 9654, 'yeti': 9655, 'gonzalezgabbiee': 9656, 'cuteee': 9657, 'solid1lance': 9658, 'restinpeaceyaya': 9659, 'gedio10': 9660, 'shatta': 9661, 'wale': 9662, 'lyk': 9663, 'den': 9664, 'dominatin': 9665, 'cathhfm': 9666, 'yankeeperson2': 9667, 'brexitcentral': 9668, 'davidtcdavies': 9669, 'remainer': 9670, 'nataiiesglitter': 9671, 'zac': 9672, 'efron': 9673, 'catgirlsbot': 9674, 'tesseract': 9675, 'begin': 9676, 'formed': 9677, '46pm': 9678, 'katie': 9679, 'banner': 9680, 'mums': 9681, 'purposely': 9682, 'scenarios': 9683, 'skullnaught80': 9684, 'lauging': 9685, 'joenbc': 9686, 'nohycuk': 9687, 'difference': 9688, 'karlaxsvm': 9689, 'sooijimochi': 9690, 'search': 9691, 'jho': 9692, 'muscles': 9693, 'muscled': 9694, 'mymixtapez': 9695, 'mymixtapezapp': 9696, 'burr': 9697, 'registers': 9698, 'troye': 9699, 'bloom': 9700, 'noooooo': 9701, 'bloomtourmnl': 9702, 'denisholton': 9703, 'daelovee': 9704, 'identified': 9705, 'rapey': 9706, 'homeboys': 9707, 'britishrogues': 9708, 'overlordexo': 9709, 'related': 9710, 'dictionary': 9711, 'davinci': 9712, 'nigg': 9713, 'cuts': 9714, 'flipping': 9715, 'uninterrupted': 9716, 'medusa': 9717, 'stheno': 9718, 'euryale': 9719, 'luciferseason4': 9720, 'chloe': 9721, 'feared': 9722, 'nazygold2': 9723, 'oooooo': 9724, 'markdice': 9725, 'ourabsolutebern': 9726, 'pretentious': 9727, 'dylanmsmitty': 9728, 'namgistudio': 9729, 'idgafitri': 9730, 'mbulelombali': 9731, 'yuhp': 9732, 'wena': 9733, 'nowwwww': 9734, 'mattymidland': 9735, 'burnthwaitejun1': 9736, 'lgbtseokie': 9737, 'deanslist': 9738, 'liennegrc': 9739, 'amosjonah': 9740, 'evacuated': 9741, 'minds': 9742, 'banged': 9743, 'gba': 9744, 'trum': 9745, 'duppytech': 9746, 'ccp': 9747, 'surveillance': 9748, 'darwin': 9749, 'drumscup': 9750, 'weddings': 9751, '21marvelousmarv': 9752, 'nd': 9753, 'mineifiwildout': 9754, 'lanez': 9755, 'hairline': 9756, 'adewalepresh': 9757, 'patient': 9758, 'relax': 9759, 'sexyspo': 9760, 'sfan': 9761, 'dylanjacksonnba': 9762, 'maryhar92096144': 9763, 'schoolgirl': 9764, 'cumslut': 9765, 'cockslut': 9766, 'cumwhore': 9767, 'jailbait': 9768, 'freaky': 9769, 'spitting': 9770, 'darealbuttercup': 9771, 'exchanged': 9772, 'vows': 9773, '1061bli': 9774, 'btsfam': 9775, 'miguel': 9776, 'r1finesse': 9777, 'wakes': 9778, 'boyy': 9779, 'gigglegguk': 9780, 'hobi': 9781, 'pronouncing': 9782, 'ich': 9783, 'werde': 9784, 'diesen': 9785, 'abend': 9786, 'nicht': 9787, 'vergessen': 9788, 'tiniest': 9789, 'fon': 9790, 'lisamei62': 9791, 'acejordan23': 9792, 'gc': 9793, 'buzz': 9794, 'delegate': 9795, 'devon': 9796, 'atmosphere': 9797, 'calrootsfest': 9798, 'ericgarland': 
9799, 'deutsche': 9800, 'kushner': 9801, 'blcksiren': 9802, 'uneat': 9803, 'leegregory4367': 9804, 'faker': 9805, 'sims': 9806, 'aly': 9807, 'kylekallgren': 9808, 'shove': 9809, 'concept': 9810, 'unprofessional': 9811, 'bust': 9812, 'tcmpa': 9813, 'dababy': 9814, 'yawns': 9815, 'midterm': 9816, 'reyserna': 9817, 'jaketapper': 9818, 'kinkywaveoral': 9819, 'miamercy': 9820, 'eater': 9821, 'gonzalovaa': 9822, 'stoopid': 9823, 'briwillsss': 9824, 'leilargerstein': 9825, 'horrific': 9826, 'designed': 9827, 'kavanaugh': 9828, 'notgivenchyass': 9829, 'blatantly': 9830, 'skytokki': 9831, 'stories': 9832, 'threat': 9833, 'csulb': 9834, 'juggernaut': 9835, 'deadpool': 9836, 'bothered': 9837, 'cowards': 9838, 'aboit': 9839, 'forgiveness': 9840, 'richthekid': 9841, 'badder': 9842, 'differently': 9843, 'skj1sung': 9844, 'woojin': 9845, 'ep': 9846, 'jeremy': 9847, 'zucker': 9848, 'comethru': 9849, 'sheeran': 9850, 'dive': 9851, 'ft': 9852, 'disco': 9853, 'jona': 9854, 'feeds': 9855, 'recipe': 9856, 'pot': 9857, 'jigg': 9858, 'coloradoscenery': 9859, 'comprehen': 9860, 'guyh': 9861, 'afcmedium': 9862, 'rehnato': 9863, 'nonamepat': 9864, 'asherjpal': 9865, 'supdogsecu': 9866, 'rooftop': 9867, 'defender': 9868, 'unhinged': 9869, 'samuelzhr': 9870, 'larson': 9871, 'exhausted': 9872, 'satisfied': 9873, 'stuffed': 9874, 'eatsma': 9875, 'eathealthy': 9876, 'ionsize': 9877, 'ginajoseph': 9878, 'constant': 9879, 'hardinscott': 9880, 'aftermovie': 9881, 'afterpassion': 9882, 'xi': 9883, 'jinping': 9884, 'powerxsurge': 9885, 'guckytae': 9886, 'utmost': 9887, 'seats': 9888, 'comfey': 9889, 'texted': 9890, 'stephen': 9891, 'taestilyy': 9892, 'controller': 9893, 'carpooling': 9894, 'gotten': 9895, 'waaaay': 9896, 'lighskindevil': 9897, 'trish': 9898, 'regan': 9899, 'trishregan': 9900, 'bangladesh': 9901, 'commerce': 9902, '52am': 9903, '54f': 9904, '45f': 9905, 'current': 9906, 'cloudy': 9907, 'ariantheworld': 9908, 'adamturneraf': 9909, 'amvetsuppo': 9910, 'morality': 9911, 'suppor': 9912, 'ailaaathefirst': 9913, 'anifanatical': 9914, 'heads': 9915, 'jerker': 9916, 'kibvan': 9917, 'jes': 9918, 'chastain': 9919, 'ends': 9920, 'iron': 9921, 'daenery': 9922, 'exceptions': 9923, 'telushk': 9924, 'activists': 9925, 'sadgirlrosa': 9926, 'nerve': 9927, 'voting': 9928, 'stricter': 9929, 'ashore': 9930, 'questi': 9931, 'farwell': 9932, 'ohl': 9933, 'disappoint': 9934, 'granvillea': 9935, 'lah3309': 9936, 'ar15m4mid': 9937, 'patriotjenn': 9938, 'roxxxygurl': 9939, 'southerngirl151': 9940, 'mammalon': 9941, 'cutefunnyanimal': 9942, 'carlhigbie': 9943, 'sadeadekugbe': 9944, 'derogatory': 9945, 'disabilities': 9946, 'mrva1n': 9947, 'royalstreamers': 9948, 'twnn': 9949, '0riles': 9950, 'fx': 9951, 'jiminspromlse': 9952, 'jeon': 9953, 'millionaire': 9954, 'fentybeauty': 9955, 'calicovkook': 9956, 'hsksbsjs': 9957, 'btsxmetlifed1': 9958, 'allkpop': 9959, '127': 9960, 'mcucevans': 9961, 'scpyulia': 9962, 'hop': 9963, 'jeep': 9964, 'armor': 9965, 'affected': 9966, 'literall': 9967, 'tinyhands': 9968, '1776stonewall': 9969, 'primaries': 9970, 'elysianjae': 9971, 'ahgases': 9972, 'comeback': 9973, 'teaser': 9974, 'jyp': 9975, 'ahgase': 9976, 'peachyabstract': 9977, 'ndtv': 9978, 'suriya': 9979, 'kodakvon': 9980, 'instructions': 9981, 'clit': 9982, 'whirl': 9983, 'retards': 9984, 'wig': 9985, 'translated': 9986, 'jordankyys': 9987, 'hyunjinlsm': 9988, 'jaehyun': 9989, 'hsjsj': 9990, 'nct127insj': 9991, 'nct127inusa': 9992, 'doomnotnice': 9993, 'realization': 9994, 'nuh': 9995, 'gi': 9996, 'hce': 9997, 'ificate': 9998, 'fakepresi': 9999, 
'goofy': 10000, 'sucker': 10001, 'pmueller14': 10002, 'pregnancy': 10003, 'larrysabato': 10004, 'readers': 10005, 'reelection': 10006, 'slogan': 10007, 'jenndeon': 10008, 'pint': 10009, 'newfoundland': 10010, 'celebates': 10011, 'dccc': 10012, 'repeal': 10013, 'obamacare': 10014, 'olodo': 10015, 'tranmere': 10016, 'offs': 10017, 'therealkacydash': 10018, 'benevolentarm': 10019, 'dazai': 10020, 'dostoevsky': 10021, 'sapiosexual': 10022, 'foreground': 10023, 'gogol': 10024, 'justinbobby': 10025, 'destroyer': 10026, 'satirical': 10027, 'taradublinrocks': 10028, 'barrcoverup': 10029, 'subp': 10030, 'maritzialopez': 10031, 'negativity': 10032, 'pepperpixie': 10033, 'positive': 10034, 'feminism': 10035, 'randomers': 10036, 'libera': 10037, '190502': 10038, '969': 10039, '82': 10040, '652': 10041, '80000': 10042, 'm3lchyyy': 10043, 'mdepo': 10044, 'iwe': 10045, 'nadai': 10046, 'kuwashow': 10047, 'petite': 10048, 'sunkissjimiin': 10049, 'writingcommunity': 10050, 'amwriting': 10051, 'fiction': 10052, 'nuanced': 10053, 'developed': 10054, 'villains': 10055, 'dcexaminer': 10056, 'lipped': 10057, 'bends': 10058, 'turd': 10059, 'mikunknown': 10060, 'barrel': 10061, 'maybot': 10062, 'grayling': 10063, 'wis': 10064, 'mblockedu': 10065, 'j23app': 10066, 'deepfillernyc': 10067, 'ftm': 10068, 'quisling': 10069, 'cretinous': 10070, 'fcbarcelona': 10071, 'jasonscheer': 10072, '1290': 10073, 'rated': 10074, 'tucson': 10075, 'ironmikeluke': 10076, 'mchunted': 10077, 'fma': 10078, 'axiir0': 10079, 'trapgawdd': 10080, 'decides': 10081, 'heigl': 10082, 'judy53103493': 10083, 'screamin': 10084, 'tarsierorosanna': 10085, 'sdlplive': 10086, 'preference': 10087, 'hobistark': 10088, 'fluent': 10089, 'pres': 10090, 'spreading': 10091, 'jailing': 10092, 'nautibynature': 10093, 'daycare': 10094, 'degrees': 10095, 'earned': 10096, 'badge': 10097, 'untappd': 10098, 'hobismujer': 10099, 'taes': 10100, 'rayneadrianax': 10101, 'fridge': 10102, 'snacks': 10103, 'titties': 10104, 'lovedbyhyunjin': 10105, 'grimezsz': 10106, 'equality': 10107, 'gabrih6ll': 10108, 'silky': 10109, 'lipsync': 10110, 'dragrace': 10111, 'notspmulvihill': 10112, 'arrow': 10113, 'perve': 10114, 'zains96': 10115, 'inferior': 10116, 'quincyolivari': 10117, 'satomiisenpaii': 10118, 'lighting': 10119, 'hangover': 10120, 'mckelvhey': 10121, 'spectacle': 10122, 'cujohaze': 10123, 'defended': 10124, 'village': 10125, 'freaking': 10126, 'mkent108': 10127, 'hethrone': 10128, 'phew': 10129, 'rickygervais': 10130, 'brilliant': 10131, 'alltingzchunli': 10132, 'offset': 10133, 'remy': 10134, 'parole': 10135, 'brittany': 10136, 'xceleste': 10137, 'esp': 10138, 'seatfillers': 10139, 'ashlikesramen': 10140, 'overhyped': 10141, 'coherent': 10142, 'wavesgod': 10143, 'mikehth': 10144, 'phelans': 10145, 'assistant': 10146, 'krizless': 10147, '11inchking': 10148, 'stripes': 10149, 'bussy': 10150, 'tigger': 10151, 'privatetoall': 10152, 'odirilesoul': 10153, 'leteng': 10154, 'rdp': 10155, 'lower': 10156, 'mikey': 10157, 'oldfield': 10158, 'mileycyrus': 10159, 'glastonbury': 10160, '29th': 10161, 'corn': 10162, 'scoot': 10163, 'chop': 10164, 'axisf': 10165, 'voluptuous': 10166, 'weedmaps': 10167, 'frostiest': 10168, 'trichomes': 10169, 'cannabinoids': 10170, 'thca': 10171, 'terpenes': 10172, 'jaclynhstrauss': 10173, 'tvrget': 10174, 'peado': 10175, 'serietv46': 10176, 'munchieeee': 10177, 'silverblakpride': 10178, 'calll': 10179, 'beastmode': 10180, 'thalafcteam': 10181, 'ajithians': 10182, 'hbdiconicthalaajith': 10183, 'unspokentext': 10184, 'bitop30': 10185, 'kallmenox': 
10186, 'strict': 10187, 'cannibal': 10188, 'nsterberg': 10189, 'karle': 10190, 'denke': 10191, 'cannibalized': 10192, '42': 10193, 'villagers': 10194, 'mitt': 10195, 'winelady19aolc1': 10196, 'joeconchatv': 10197, 'crowded': 10198, 'nearlyxhappier': 10199, 'ww2': 10200, '8s': 10201, 'tarzan': 10202, 'intechs': 10203, 'cruce': 10204, 'unrational': 10205, 'clowncursed': 10206, 'wander': 10207, 'shall': 10208, 'boast': 10209, 'vague': 10210, 'deje1000': 10211, 'florida': 10212, 'floridaing': 10213, 'eplbible': 10214, 'easierharry': 10215, 'sarahbaska': 10216, 'powder': 10217, 'hatred': 10218, 'rivercitylabs': 10219, 'rcl': 10220, 'porns': 10221, 'gutted': 10222, 'rfrass': 10223, 'whyy': 10224, 'pikachulita': 10225, 'excellence': 10226, 'astroad': 10227, 'dvd': 10228, 'jinjins': 10229, 'ssshut': 10230, '1cathymckeown': 10231, 'chadasitiyy': 10232, 'rawww': 10233, 'baesuals': 10234, 'yunojng': 10235, 'kingushbal': 10236, 'considered': 10237, 'peas': 10238, 'metrosexual': 10239, 'infernoomni': 10240, 'blazing': 10241, 'buried': 10242, 'concrete': 10243, 'yeen': 10244, 'snout': 10245, 'sheets': 10246, 'mattress': 10247, 'breaths': 10248, 'growling': 10249, 'softly': 10250, 'kingkayle': 10251, 'motherfcker': 10252, 'atrinileah': 10253, 'ramping': 10254, 'ages': 10255, 'gulzaari': 10256, 'acne': 10257, 'dontgetfried': 10258, 'getbak': 10259, 'temper': 10260, 'loooool': 10261, 'soroyalcouncil': 10262, 'nicest': 10263, 'pure': 10264, 'akeuapex': 10265, 'peesh': 10266, 'playapex': 10267, 'rotation': 10268, 'selahrainxxx': 10269, 'clips4sale': 10270, 'richardmannprod': 10271, 'babe': 10272, 'surgery': 10273, 'unborn': 10274, 'bestforbritain': 10275, 'verywishywashy': 10276, 'finalfrantasy': 10277, 'proudresister': 10278, 'mitch': 10279, 'mcconnell': 10280, 'mitchellford91': 10281, 'flower': 10282, 'vase': 10283, 'problems': 10284, 'russdiemon': 10285, 'audacity': 10286, 'narcissism': 10287, 'manifested': 10288, 'realit': 10289, 'shannonrwatts': 10290, 'sen': 10291, 'igetloudtf': 10292, 'clingy': 10293, 'bite': 10294, 'tickle': 10295, 'punkie': 10296, 'translate': 10297, 'exceptively': 10298, 'peculiar': 10299, 'penlight': 10300, 'polosec': 10301, 'diamondandsilk': 10302, 'laughed': 10303, 'ily': 10304, 'banimonica': 10305, 'sylvester': 10306, 'unique': 10307, 'hav': 10308, 'ggukly': 10309, 'kidneys': 10310, 'lillymsantiago': 10311, 'homeboy': 10312, 'existentialvibe': 10313, 'falsettour': 10314, 'bushcamp2': 10315, 'givingmejinki': 10316, 'elitetuans': 10317, 'wiggle': 10318, 'mnuchin': 10319, 'antmcfchht': 10320, 'vvd': 10321, 'five': 10322, 'yards': 10323, 'bloke': 10324, 'silkymilkvoices': 10325, 'lorafrimanee': 10326, 'corny': 10327, 'lathellg': 10328, 'bloodborne': 10329, 'combat': 10330, '90scaroidanvers': 10331, 'cdopeland': 10332, 'fav': 10333, 'def': 10334, 'embarrass': 10335, 'afteraddic': 10336, 'ear': 10337, 'seulgi': 10338, 'extremely': 10339, 'susanri38784627': 10340, 'bilderberg': 10341, 'gp': 10342, 'barrattpeter': 10343, 'hilarybennmp': 10344, 'amiss': 10345, 'kathygriffin': 10346, 'california': 10347, 'forests': 10348, 'jurisdiction': 10349, 'shfi': 10350, 'billboards': 10351, 'sonnygr4h4m': 10352, 'whoa': 10353, 'juggling': 10354, 'lotufodunrin': 10355, 'precious': 10356, 'awwwww': 10357, 'mssnewbooty': 10358, 'zufats': 10359, 'trevdaddy69': 10360, 'ducks': 10361, 'ellegenerico': 10362, 'holly': 10363, 'ihave3testiclez': 10364, 'sk': 10365, 'ontia': 10366, 'nickiminaj': 10367, 'trinarockstarr': 10368, 'andypratt81': 10369, 'remarkable': 10370, 'toake': 10371, 'gabbi': 10372, 'mello': 
10373, 'whistleirl': 10374, '700': 10375, 'gaza': 10376, 'amputations': 10377, 'warns': 10378, 'mention': 10379, 'thevictorianels': 10380, 'tweeps': 10381, 'sell': 10382, 'wears': 10383, 'dope': 10384, 'unisex': 10385, 'hoodies': 10386, 'sweatshi': 10387, 'pants': 10388, 'sneaks': 10389, 'cubano1955': 10390, 'djt': 10391, 'vultures': 10392, 'chasefoland': 10393, 'grandfeddy': 10394, 'agenda': 10395, 'darerising': 10396, 'anivies': 10397, 'castz': 10398, 'chubztr': 10399, 'oneupthreads': 10400, 'gamersupps': 10401, 'rodemics': 10402, 'respawnproducts': 10403, 'yesss': 10404, 'gloriousguard': 10405, 'fcking': 10406, 'thegame': 10407, 'smoovth': 10408, 'meetings': 10409, 'atta': 10410, 'swarzak': 10411, 'awhile': 10412, '180': 10413, 'tapos': 10414, 'ipapadraw': 10415, 'yung': 10416, 'anticipate': 10417, 'astridologia': 10418, 'suits': 10419, 'juicetoowavie': 10420, 'purchase': 10421, '92': 10422, '57': 10423, 'helenzille': 10424, 'journalism': 10425, 'sabc': 10426, 'iqbal': 10427, 'surve': 10428, 'han': 10429, 'katja': 10430, 'thieme': 10431, 'affiliations': 10432, 'signatories': 10433, 'reputations': 10434, 'bulletln': 10435, 'stairfax': 10436, 'sean': 10437, 'youll': 10438, 'uphold': 10439, 'tetris99': 10440, 'nintendoswitch': 10441, 'shawnyhoney': 10442, 'confusing': 10443, 'pacific': 10444, 'ocean': 10445, 'zerohedge': 10446, 'powell': 10447, 'dovish': 10448, 'stocks': 10449, 'soar': 10450, 'xandermobusvo': 10451, 'woooooow': 10452, 'bbma': 10453, 'website': 10454, 'lesleyallan': 10455, 'karl': 10456, 'crilly': 10457, 'beautyyyyyyyyyyyyyyy': 10458, 'kelvi': 10459, 'surprised': 10460, 'deresute': 10461, 'kr': 10462, 'romm': 10463, 'larry': 10464, 'damaged': 10465, 'markgatiss': 10466, 'slimy': 10467, 'anglicised': 10468, 'cultured': 10469, 'policemen': 10470, 'gorillas': 10471, 'juniorebong': 10472, 'ini': 10473, 'obong': 10474, 'whitedswife': 10475, 'hopefuls': 10476, '63': 10477, 'hypocrit': 10478, 'jayshon': 10479, '511': 10480, 'diesmilingg': 10481, 'koyakeyan': 10482, 'scratching': 10483, 'cd': 10484, 'fepeooh': 10485, 'k0lax': 10486, 'neo': 10487, 'tilted': 10488, 'naming': 10489, 'system': 10490, 'uni': 10491, 'priced': 10492, 'shesadarkskln': 10493, 'hoodieclone': 10494, 'janaecambraa': 10495, 'irritating': 10496, 'anonymously': 10497, 'clap': 10498, 'birdsforu': 10499, '90000g': 10500, 'sos': 10501, 'gatinhos': 10502, 'asianwolfhound': 10503, 'gloss': 10504, 'pspc': 10505, 'spac': 10506, 'ising': 10507, 'agencies': 10508, 'agency': 10509, 'oan': 10510, 'sons': 10511, 'tvietor08': 10512, 'killergaming22': 10513, 'gaigadsot': 10514, 'yousef': 10515, 'noora': 10516, 'sana': 10517, 'sofiane': 10518, 'carlasbarlow': 10519, 'carla': 10520, 'mizflagpin': 10521, 'snapboogielady': 10522, 'saturdays': 10523, 'lacrosse': 10524, 'oflpaa': 10525, 'ordunlee': 10526, 'insults': 10527, 'prettyflowergal': 10528, 'fearful': 10529, 'sound': 10530, 'witness': 10531, 'ablannar': 10532, 'continents': 10533, 'pangea': 10534, 'fear': 10535, 'relaxed': 10536, 'yell': 10537, 'lungs': 10538, 'stuttering': 10539, 'lyrics': 10540, 'seesaw': 10541, 'letstalkvivian': 10542, 'patnspankme': 10543, 'pounds': 10544, 'exercises': 10545, 'noyelue': 10546, 'bathing': 10547, 'iskaba': 10548, 'iskelebete': 10549, 'iskolo': 10550, 'zanerasp': 10551, 'attracted': 10552, 'munching': 10553, 'mfs': 10554, 'chowing': 10555, 'cackled': 10556, 'pehledesh': 10557, 'psychopath': 10558, 'aa': 10559, 'ic02': 10560, 'immak02': 10561, 'aam': 10562, 'nationalist': 10563, 'epicrofldon': 10564, 'aacharyasahiil': 10565, 'biggeorgeaz': 
10566, 'pigsandplans': 10567, 'ferrari': 10568, 'danieljohnsalt': 10569, 'impossible': 10570, 'brex': 10571, 'episodes': 10572, 'yyvibes': 10573, 'cocky': 10574, 'commiedobbs': 10575, 'trdavis8337': 10576, 'brheabc13': 10577, 'abc13houston': 10578, 'ore': 10579, 'theboss': 10580, 'rambobiggs': 10581, 'giossyerim': 10582, 'tissue': 10583, 'likeejdjsjzjjss': 10584, 'anatescott': 10585, 'ewarren': 10586, 'unredacted': 10587, 'activities': 10588, 'strengths': 10589, 'personali': 10590, 'arrives': 10591, 'iashutosh23': 10592, 'morally': 10593, 'ethically': 10594, 'msisodia': 10595, 'atishiaap': 10596, 'arvindk': 10597, 'bern': 10598, 'emerah': 10599, 'justxhenry': 10600, 'jaeminpic': 10601, 'taipei': 10602, 'nangang': 10603, 'exhibition': 10604, 'hall': 10605, 'llorona': 10606, 'otsopeare': 10607, '2019capitalstb': 10608, 'inwhat123': 10609, 'censorship': 10610, 'finest': 10611, 'lmaoo': 10612, 'blogger': 10613, 'scowling': 10614, 'doe': 10615, 'causticbob': 10616, '2ndlightdiv': 10617, '1upgames': 10618, 'casualdragongms': 10619, 'sapirmizrahi2': 10620, 'itisdxvid': 10621, 'asgard': 10622, 'vivid': 10623, 'anal': 10624, 'plumbing': 10625, 'lowe': 10626, 'terriaxoxo': 10627, 'yalls': 10628, 'clombardioso': 10629, 'yourlocalgaymom': 10630, 'stalls': 10631, 'congregation': 10632, 'ufcnmir1': 10633, 'refluxgate': 10634, 'lpr': 10635, 'reflux': 10636, 'taniaarpa': 10637, 'cebuano': 10638, 'host': 10639, 'terribly': 10640, 'diane': 10641, 'abbott': 10642, 'unreal': 10643, 'riaaror09913771': 10644, 'relijoon': 10645, 'gentlerubs': 10646, 'heh': 10647, 'php': 10648, 'files': 10649, 'technica': 10650, 'dvatw': 10651, 'homosexuality': 10652, 'teache': 10653, 'clinthenderson7': 10654, 'active': 10655, 'held': 10656, 'candid': 10657, 'constructive': 10658, 'conversations': 10659, 'karlbubi': 10660, 'afford': 10661, 'lowry': 10662, 'nrlonnine': 10663, 'wwos': 10664, 'channel9': 10665, 'cronulla': 10666, 'dugan': 10667, 'jennyletellier': 10668, 'canonavengers': 10669, 'compelling': 10670, 'realised': 10671, 'demonstr': 10672, 'crickets': 10673, 'display': 10674, 'dreamy': 10675, 'daya': 10676, 'jacob': 10677, 'batalon': 10678, 'recognition': 10679, 'theasiachanelle': 10680, 'experienced': 10681, 'whiteley': 10682, 'lfcvine': 10683, 'matip': 10684, 'venusoleil': 10685, 'thejuicyjolene': 10686, 'stoooooop': 10687, 'shivanikesarwa2': 10688, 'betu': 10689, 'shivi': 10690, 'coolpadsma': 10691, 'mycoolmom': 10692, 'mycoolname': 10693, 'orchid': 10694, 'endwalespinal': 10695, 'focusing': 10696, 'coutinho': 10697, 'kokomothegreat': 10698, '7m': 10699, '106000': 10700, 'fists': 10701, 'sial': 10702, 'gallovoa': 10703, 'downplaying': 10704, 'moratorium': 10705, 'range': 10706, 'missiles': 10707, 'lerexxhd': 10708, 'chrislhayes': 10709, 'laying': 10710, 'groundwork': 10711, 'sic': 10712, 'deathjuicetrapa': 10713, 'lex': 10714, 'supergirl': 10715, 'scorer': 10716, 'scottbalf': 10717, 'sock': 10718, 'destroyers': 10719, 'menace2anxiety': 10720, 'models': 10721, 'rayne': 10722, 'jointhebreed': 10723, 'nobrosmo': 10724, 'rochelle': 10725, 'meyer1': 10726, 'khanya': 10727, 'joeynocollusion': 10728, 'previously': 10729, 'truck': 10730, 'payment': 10731, 'affiliated': 10732, 'hells': 10733, 'angels': 10734, 'motorcycle': 10735, 'hbcufessions': 10736, 'clock': 10737, '05': 10738, '32am': 10739, 'jumper': 10740, 'cables': 10741, 'propane': 10742, 'dang': 10743, 'anyother': 10744, 'andrearatkovic': 10745, 'dchris114': 10746, 'pmarshwx': 10747, 'nwsspc': 10748, 'bullseye': 10749, 'songadaymann': 10750, 'comprised': 10751, 
'hyyhgguk': 10752, 'attend': 10753, 'vcrs': 10754, 'attendin': 10755, 'thefauxsynder': 10756, 'reached': 10757, 'sora': 10758, 'riku': 10759, 'laelluke': 10760, 'lesley': 10761, 'misleading': 10762, 'raises': 10763, 'sheep': 10764, 'pictur': 10765, 'lilglizzy12': 10766, 'glizz': 10767, 'sufy2': 10768, '7btsaf': 10769, 'lv': 10770, 'whe': 10771, 'samiraa1000': 10772, 'takers': 10773, 'unserious': 10774, 'epidemic': 10775, 'bandopopp': 10776, 'geofflambe': 10777, '77': 10778, 'periscope': 10779, 'armchair': 10780, 'fantasy': 10781, 'predictions': 10782, 'jinx': 10783, 'myfantasy': 10784, 'kanasous': 10785, 'whim': 10786, 'fluffy': 10787, 'ot3': 10788, 'fic': 10789, 'wwe': 10790, 'samizayn': 10791, 'vince': 10792, 'liam': 10793, 'hope1987': 10794, 'although': 10795, 'cauldron': 10796, 'gaff': 10797, 'piff': 10798, 'admiralaegis': 10799, 'solo': 10800, 'rout': 10801, 'djkhaled': 10802, 'myth': 10803, 'rememberin': 10804, 'thobykov': 10805, 'stairs': 10806, 'trocolopoderoso': 10807, 'chilling': 10808, 'scrotum': 10809, '2cm': 10810, 'push': 10811, 'thoug': 10812, 'lightsaber': 10813, 'ltec3010': 10814, 'resource': 10815, 'negotiation': 10816, 'hosseeb': 10817, 'jobsearch': 10818, 'careeradvice': 10819, 'rainydaywoman': 10820, 'gretahansen12': 10821, 'justintweets4': 10822, 'krunner': 10823, 'fancier': 10824, 'breadsticks': 10825, 'fabric': 10826, 'sack': 10827, 'cbs': 10828, 'yeetodacheeto': 10829, 'calimosthated': 10830, 'solitudekth': 10831, 'canf': 10832, 'laughinf': 10833, 'namjoons': 10834, 'taheyungs': 10835, 'wt': 10836, 'kms': 10837, 'noratoriou5': 10838, 'liars': 10839, 'bernie2020': 10840, 'beyondlegends': 10841, 'roman': 10842, 'nfkrz': 10843, 'alzheimers': 10844, 'misbegottenman': 10845, 'skullfucking': 10846, 'bulge': 10847, 'uchepokoye': 10848, 'gathering': 10849, 'finland': 10850, 'gabon': 10851, 'ghana': 10852, 'trigge': 10853, 'sheedchapo': 10854, 'havin': 10855, 'taps22565': 10856, 'gma': 10857, 'abcworldnews': 10858, 'mattgutmanabc': 10859, 'shoots': 10860, 'whatthe': 10861, 'tease': 10862, 'skynewsaust': 10863, 'chriskkenny': 10864, 'ol': 10865, 'rupe': 10866, 'arse': 10867, 'zerlinamaxwell': 10868, 'nickname': 10869, 'kyngkhokhas': 10870, 'dembele': 10871, 'masethabamalek1': 10872, 'investor': 10873, 'ideologies': 10874, 'veritaamore87': 10875, 'adding': 10876, 'iammald': 10877, 'cuddling': 10878, 'clark': 10879, 'gasm': 10880, 'letters': 10881, 'insider': 10882, 'bybuccellati': 10883, 'bruno': 10884, 'aunt': 10885, 'jlm601': 10886, 'gvqz': 10887, 'fuckboy': 10888, 'davidfrawleyved': 10889, 'tackled': 10890, 'upa': 10891, 'crucial': 10892, 'claptoncfc': 10893, 'subscrib': 10894, 'urias': 10895, 'lilkeefquotes': 10896, 'wondering': 10897, 'theologoolutoye': 10898, '8m': 10899, '473000': 10900, '52000': 10901, '6000': 10902, 'lhhny': 10903, 'britney': 10904, 'rat': 10905, 'bioodfetish': 10906, 'sindivanzyl': 10907, 'mbali': 10908, 'mihuowo': 10909, 'doctor': 10910, 'stretch': 10911, 'zpxlng': 10912, 'committal': 10913, 'truongasm': 10914, '315': 10915, '827': 10916, 'priory': 10917, 'samantha': 10918, 'shannon': 10919, 'desperation': 10920, 'setting': 10921, 'thedemocrats': 10922, 'caudronma': 10923, 'euopenhouse': 10924, 'whiiizdom': 10925, 'shreds': 10926, 'tianabarajas': 10927, 'aubrianadicarlo': 10928, 'firstnamegabby': 10929, 'guuuuuurlll': 10930, 'petras': 10931, 'sneel': 10932, 'crawfish': 10933, 'coloring': 10934, 'evewhite5500': 10935, 'pajaritodeivan': 10936, 'ivan': 10937, 'notviking': 10938, 'gummy': 10939, 'vitamins': 10940, 'alexisscarrasco': 10941, 'ey': 10942, 
'bleach': 10943, 'djoness': 10944, 'teary': 10945, 'eyed': 10946, 'consistent': 10947, 'retain': 10948, 'viewers': 10949, 'risquebrat': 10950, 'featdios': 10951, 'harp': 10952, '1984': 10953, 'milnesd': 10954, 'juliahb1': 10955, 'loon': 10956, 'bawilo': 10957, 'mirror': 10958, 'walkers': 10959, 'swooshgod': 10960, 'promotion': 10961, 'soak': 10962, 'humorandanimals': 10963, 'puppiesclub': 10964, 'trishmorrison15': 10965, 'grants4usa': 10966, 'powderpuff': 10967, 'fiame': 10968, 'fastcarspete': 10969, 'nonce': 10970, 'tommeh': 10971, 'chuckled': 10972, 'encyclopedia': 10973, 'dramatica': 10974, 'godawful': 10975, 'kiwifarms': 10976, 'hallmarks': 10977, 'documentation': 10978, 'individu': 10979, 'sarugetchuu': 10980, 'torress': 10981, 'karenn': 10982, 'kylie': 10983, 'flexx': 10984, 'fnm': 10985, 'ashgriffo': 10986, 'napping': 10987, 'justicedems': 10988, 'djxve': 10989, 'igo': 10990, 'checking': 10991, 'jayjohnsofresh': 10992, 'gigz': 10993, 'crucible': 10994, 'gigi01wilson': 10995, 'sigh': 10996, 'lok': 10997, 'sabha': 10998, 'ironsighten': 10999, 'ironsightcentral': 11000, 'realmonatepizza': 11001, 'woodfired': 11002, 'monate': 11003, 'scotteweinberg': 11004, 'cosmopolis': 11005, 'pattinson': 11006, 'wayne': 11007, 'clydessb': 11008, 'timothy': 11009, 'weah': 11010, 'lilnasx': 11011, 'forcing': 11012, 'cowboy': 11013, 'yee': 11014, 'haw': 11015, 'domyoonji': 11016, 'diana': 11017, 'oyaro': 11018, 'delete': 11019, 'hardly': 11020, 'mayday': 11021, '831c': 11022, 'stevehasatweet': 11023, 'tatabwa': 11024, 'drgpradhan': 11025, 'island': 11026, 'rahulgandhi': 11027, 'choppers': 11028, 'ships': 11029, 'melanindaj': 11030, 'lyin': 11031, 'downnnn': 11032, 'lukedyson': 11033, 'jhailess4': 11034, 'brownbullymanko': 11035, 'simranattree': 11036, 'ayuni': 11037, 'sund': 11038, 'theta': 11039, 'adddisonc': 11040, 'blake': 11041, 'lively': 11042, 'doggodating': 11043, 'daveockop': 11044, 'klopp': 11045, 'millie': 11046, 'fabinho': 11047, 'junior': 11048, 'deano': 11049, 'vw': 11050, 'mogulbaggins': 11051, 'spine': 11052, 'doththedoth': 11053, 'demon': 11054, 'possess': 11055, 'afcajax': 11056, 'consecutive': 11057, 'khaledbeydoun': 11058, '1925': 11059, '94': 11060, '600breezy': 11061, 'russell': 11062, 'wilson': 11063, 'ended': 11064, 'respectfully': 11065, 'kelsimwalker': 11066, 'gardettos': 11067, 'ara': 11068, 'minta': 11069, 'filthya': 11070, 'floetic': 11071, 'fusion': 11072, 'showcase': 11073, 'nw': 11074, 'abbn0rmal': 11075, 'flytpa': 11076, 'tpa': 11077, 'plans': 11078, 'rooms': 11079, 'filling': 11080, 'harapper': 11081, 'hr': 11082}
###Markdown
Encoding or Sequencing
###Code
encoded_clean_text_stem = tok_all.texts_to_sequences(clean_text_stem)
print(clean_text_stem[1])
print(encoded_clean_text_stem[1])
###Output
airjunebug : bay really ny nigga hea w suppo caleon
[3164, 3165, 30, 1367, 114, 192, 75, 202, 3166]
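###Markdown
Before fixing a padding length, it helps to look at how long the encoded sequences actually are. The following is a minimal sketch (assuming `encoded_clean_text_stem` from the cell above) that prints a few length statistics to motivate the `max_length` used in the next step.
###Code
import numpy as np

# Lengths of the encoded training sequences
seq_lengths = np.array([len(seq) for seq in encoded_clean_text_stem])

# Summary statistics; a max_length near the upper tail avoids truncating most tweets
print("min length :", seq_lengths.min())
print("max length :", seq_lengths.max())
print("mean length:", round(float(seq_lengths.mean()), 2))
print("95th pct   :", np.percentile(seq_lengths, 95))
###Output
_____no_output_____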
###Markdown
Pre-padding
###Code
from keras.preprocessing import sequence
max_length = 100
padded_clean_text_stem = sequence.pad_sequences(encoded_clean_text_stem, maxlen=max_length, padding='pre')
###Output
_____no_output_____
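###Markdown
A quick sanity check on the padded matrix (a sketch using the variables defined above): every row should now have exactly `max_length` entries, with zeros added at the front of shorter tweets because of the `'pre'` padding.
###Code
# Shape is (number of tweets, max_length)
print(padded_clean_text_stem.shape)

# Shorter sequences are left-padded with zeros
print(padded_clean_text_stem[1])
###Output
_____no_output_____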
###Markdown
Test Data Pre-processing Test Data Reading
###Code
data_t = pd.read_csv('drive/My Drive/HASOC Competition Data/english_test_1509.csv')
pd.set_option('display.max_colwidth',150)
data_t.head(10)
data_t.shape
print(data_t.dtypes)
###Output
tweet_id int64
text object
task1 object
task2 object
ID object
dtype: object
###Markdown
Making of "label" Variable
###Code
label_t = data_t['task1']
label_t.head()
###Output
_____no_output_____
###Markdown
Checking Dataset Balancing
###Code
print(label_t.value_counts())
import matplotlib.pyplot as plt
label_t.value_counts().plot(kind='bar', color='red')
###Output
HOF 423
NOT 391
Name: task1, dtype: int64
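###Markdown
The same balance check can be expressed as proportions; a small sketch using the `label_t` series above. Values close to 0.5 for both classes indicate a roughly balanced test set.
###Code
# Relative class frequencies instead of raw counts
print(label_t.value_counts(normalize=True))
###Output
_____no_output_____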
###Markdown
Converting labels into "0" or "1"
###Code
import numpy as np
classes_list_t = ["HOF","NOT"]
label_t_index = data_t['task1'].apply(classes_list_t.index)
final_label_t = np.asarray(label_t_index)
print(final_label_t[:10])
from keras.utils.np_utils import to_categorical
label_twoDimension_t = to_categorical(final_label_t, num_classes=2)
print(label_twoDimension_t[:10])
###Output
[[0. 1.]
[1. 0.]
[0. 1.]
[1. 0.]
[1. 0.]
[0. 1.]
[1. 0.]
[1. 0.]
[1. 0.]
[1. 0.]]
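###Markdown
When the one-hot rows need to be mapped back to class indices (for example to compare model predictions against `final_label_t`), `np.argmax` along axis 1 reverses the encoding; a minimal sketch:
###Code
import numpy as np

# Recover integer class indices (0 = "HOF", 1 = "NOT") from the one-hot rows
recovered_label_t = np.argmax(label_twoDimension_t, axis=1)
print(recovered_label_t[:10])
###Output
_____no_output_____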
###Markdown
Making of "text" Variable
###Code
text_t = data_t['text']
text_t.head(10)
###Output
_____no_output_____
###Markdown
**Dataset Pre-processing**
1. Remove unwanted words
2. Stopwords removal
3. Stemming
4. Tokenization
5. Encoding or Sequencing
6. Pre-padding

1. Removing Unwanted Words
###Code
import re
def text_clean(text):
''' Pre process and convert texts to a list of words '''
text=text.lower()
# Clean the text
text = re.sub(r"[^A-Za-z0-9^,!.\/'+-=]", " ", text)
text = re.sub(r"what's", "what is ", text)
text = re.sub(r"I'm", "I am ", text)
text = re.sub(r"\'s", " ", text)
text = re.sub(r"\'ve", " have ", text)
text = re.sub(r"can't", "cannot ", text)
text = re.sub(r"wouldn't", "would not ", text)
text = re.sub(r"shouldn't", "should not ", text)
text = re.sub(r"shouldn", "should not ", text)
text = re.sub(r"didn", "did not ", text)
text = re.sub(r"n't", " not ", text)
text = re.sub(r"i'm", "i am ", text)
text = re.sub(r"\'re", " are ", text)
text = re.sub(r"\'d", " would ", text)
text = re.sub(r"\'ll", " will ", text)
text = re.sub('https?://\S+|www\.\S+', "", text)
text = re.sub(r",", " ", text)
text = re.sub(r"\.", " ", text)
text = re.sub(r"!", " ! ", text)
text = re.sub(r"\/", " ", text)
text = re.sub(r"\^", " ^ ", text)
text = re.sub(r"\+", " + ", text)
text = re.sub(r"\-", " - ", text)
text = re.sub(r"\=", " = ", text)
text = re.sub(r"'", " ", text)
text = re.sub(r"(\d+)(k)", r"\g<1>000", text)
text = re.sub(r":", " : ", text)
text = re.sub(r" e g ", " eg ", text)
text = re.sub(r" b g ", " bg ", text)
text = re.sub(r" u s ", " american ", text)
text = re.sub(r"\0s", "0", text)
text = re.sub(r" 9 11 ", "911", text)
text = re.sub(r"e - mail", "email", text)
text = re.sub(r"j k", "jk", text)
text = re.sub(r"\s{2,}", " ", text)
text = re.sub(r"rt", " ", text)
return text
clean_text_t = text_t.apply(lambda x:text_clean(x))
clean_text_t.head(10)
###Output
_____no_output_____
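###Markdown
As an illustration, applying `text_clean` to a single hypothetical tweet (not taken from the dataset) shows the combined effect of lower-casing, contraction expansion, URL removal and punctuation spacing:
###Code
# Hypothetical example tweet to illustrate the cleaning function
sample_tweet = "RT @user: I'm sooo tired!! can't deal with this https://t.co/abc123"
print(text_clean(sample_tweet))
###Output
_____no_output_____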
###Markdown
2. Removing Stopwords
###Code
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
def stop_words_removal(text1):
text1=[w for w in text1.split(" ") if w not in stopwords.words('english')]
return " ".join(text1)
clean_text_t_ns=clean_text_t.apply(lambda x: stop_words_removal(x))
print(clean_text_t_ns.head(10))
###Output
0 delmiyaa : samini resetting show moving things along nothing happened need know greatness
1 swxnsea know left
2 tried get divock origi free seeing club loan accepted offer actual
3 nutclusteruwu : yalls stupid white ass reactions meeting tom holland disneyland fucking kidding would
4 amp; bitch got big girls things
5 need hash browns
6 thefrankcomin fuck like end world
7 stoned2thabones : high shit
8 nothoopoverhoes : losing nice guy losing lame lmao
9 sammyyyk12 ummmmm excuse bitch
Name: text, dtype: object
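###Markdown
`stopwords.words('english')` rebuilds the stopword list on every call, so the `apply` above recomputes it once per tweet. Below is a minimal sketch of the same step with the list cached once as a set, which behaves identically but runs faster on larger datasets:
###Code
from nltk.corpus import stopwords

# Build the stopword set once instead of on every apply() call
stop_set = set(stopwords.words('english'))

def stop_words_removal_fast(text1):
    # Same behaviour as stop_words_removal above, with O(1) membership checks
    return " ".join(w for w in text1.split(" ") if w not in stop_set)

# Equivalent to the cell above:
# clean_text_t_ns = clean_text_t.apply(stop_words_removal_fast)
###Output
_____no_output_____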
###Markdown
3. Stemming
###Code
# Stemming
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
def word_stemmer(text):
    # Stem each whitespace-separated word and re-join into a sentence
    stem_text = " ".join([stemmer.stem(word) for word in text.split()])
    return stem_text
clean_text_t_stem = clean_text_t_ns.apply(lambda x : word_stemmer(x))
print(clean_text_t_stem.head(10))
###Output
0 delmiyaa : samini resetting show moving things along nothing happened need know greatness
1 swxnsea know left
2 tried get divock origi free seeing club loan accepted offer actual
3 nutclusteruwu : yalls stupid white ass reactions meeting tom holland disneyland fucking kidding would
4 amp; bitch got big girls things
5 need hash browns
6 thefrankcomin fuck like end world
7 stoned2thabones : high shit
8 nothoopoverhoes : losing nice guy losing lame lmao
9 sammyyyk12 ummmmm excuse bitch
Name: text, dtype: object
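###Markdown
A quick illustration of `word_stemmer` on a hypothetical sentence; inflected forms such as "meetings" and "losing" are reduced by the Porter stemmer to their stems ("meet", "lose"):
###Code
# Hypothetical sentence (not from the dataset) to illustrate stemming
example_sentence = "losing nice meetings happened reactions"
print(word_stemmer(example_sentence))
###Output
_____no_output_____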
###Markdown
4. Tokenization
###Code
import keras
import tensorflow
from keras.preprocessing.text import Tokenizer
tok_test = Tokenizer(filters='!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~', lower=True, char_level = False)
tok_test.fit_on_texts(clean_text_t_stem)
vocabulary_all_test = len(tok_test.word_counts)
print(vocabulary_all_test)
test_list = tok_test.word_index
print(test_list)
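# Note: tok_test is fitted only on the test tweets, so its word indices differ
# from those of tok_all (the tokenizer fitted on the training text). To encode
# the test set consistently with the training vocabulary, the training tokenizer
# would be reused, e.g. (sketch, assuming tok_all from the training cells above):
# encoded_clean_text_t_stem = tok_all.texts_to_sequences(clean_text_t_stem)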
###Output
{'fuck': 1, 'shit': 2, 'get': 3, 'need': 4, 'fucking': 5, 'go': 6, 'like': 7, 'ass': 8, 'want': 9, 'people': 10, 'know': 11, 'bitch': 12, 'never': 13, 'think': 14, 'ever': 15, 'today': 16, 'bts': 17, 'see': 18, 'would': 19, 'president': 20, 'got': 21, 'damn': 22, 'going': 23, 'u': 24, 'good': 25, 'look': 26, 'getting': 27, 'man': 28, 'tell': 29, 'away': 30, 'one': 31, 'big': 32, 'stop': 33, 'stupid': 34, 'time': 35, 'sick': 36, '2': 37, 'trump': 38, 'even': 39, 'everything': 40, 'really': 41, 'oh': 42, 'b': 43, 'realdonaldtrump': 44, 'right': 45, 'better': 46, 'work': 47, 'gonna': 48, 'come': 49, 'show': 50, 'said': 51, 'die': 52, 'say': 53, 'sta': 54, 'make': 55, 'could': 56, 'little': 57, '19': 58, 'twt': 59, 'old': 60, 'give': 61, 'rest': 62, 'still': 63, '3': 64, '1': 65, 'let': 66, 'amp': 67, 'pa': 68, 'someone': 69, 'morning': 70, 'found': 71, 'probably': 72, 'hea': 73, 'bbmas': 74, 'ready': 75, 'fine': 76, 'thought': 77, 'love': 78, 'hate': 79, 'put': 80, 'two': 81, 'things': 82, 'year': 83, 'im': 84, 'gets': 85, 'back': 86, 'day': 87, '2019': 88, 'f': 89, 'live': 90, 'always': 91, 'world': 92, 'tonight': 93, 'help': 94, 'keep': 95, 'america': 96, 'country': 97, 'gotta': 98, 'fact': 99, 'father': 100, 'hand': 101, 'women': 102, 'years': 103, 'everyone': 104, 'dumb': 105, 'music': 106, 'guy': 107, 'niggas': 108, 'days': 109, 'talking': 110, 'cannot': 111, 'bullshit': 112, 'way': 113, 'lol': 114, 'face': 115, 'school': 116, 'bi': 117, 'h': 118, 'tired': 119, 'run': 120, 'coming': 121, 'ask': 122, 'god': 123, 'calling': 124, 'left': 125, 'white': 126, 'dead': 127, 'take': 128, 'life': 129, 'w': 130, 'saying': 131, 'thing': 132, 'money': 133, 'name': 134, 'office': 135, 'son': 136, 'sorry': 137, 'yet': 138, 'mother': 139, '10': 140, 'kids': 141, 'bbmastopsocial': 142, 'hope': 143, 'idiot': 144, 'call': 145, 'gt': 146, 'feel': 147, 'gross': 148, 'free': 149, 'us': 150, 'last': 151, 'else': 152, 'whole': 153, 'followed': 154, 'ya': 155, 'cause': 156, 'words': 157, 'mean': 158, 'may': 159, 'checked': 160, 'worst': 161, 'c': 162, 'brother': 163, 'bit': 164, 'rn': 165, 'mental': 166, 'well': 167, 'men': 168, 'th': 169, '8': 170, 'able': 171, 'https': 172, 'times': 173, 'g': 174, 'please': 175, 'making': 176, 'mad': 177, 'state': 178, 'happy': 179, 'something': 180, 'card': 181, '15': 182, 'bc': 183, 'done': 184, 'suck': 185, 'tho': 186, 'game': 187, 'best': 188, 'find': 189, 'wo': 190, 'literally': 191, 'person': 192, 'beautiful': 193, 'omg': 194, 'wow': 195, 'remember': 196, 'wanna': 197, 'happened': 198, 'high': 199, 'lmao': 200, 'piece': 201, 'stand': 202, 'also': 203, 'somebody': 204, 'night': 205, 'kim': 206, 'knew': 207, 'wish': 208, 'bout': 209, 'automatically': 210, 'took': 211, 'funny': 212, 'nigga': 213, 'change': 214, 'calls': 215, 'health': 216, 'ok': 217, 'home': 218, 'ed': 219, 'long': 220, 'trash': 221, 'mood': 222, 'facts': 223, 'instead': 224, 'girl': 225, 'gone': 226, 'hday': 227, 'hot': 228, 'ing': 229, 'anyone': 230, 'holy': 231, 'games': 232, 'weak': 233, 'wit': 234, 'almost': 235, 'met': 236, 'dont': 237, '6': 238, 'baby': 239, 'boys': 240, 'pm': 241, 'vote': 242, 'either': 243, 'told': 244, 'watch': 245, 'cant': 246, 'dad': 247, 'much': 248, 'af': 249, 'case': 250, 'record': 251, 'congratulations': 252, 'tried': 253, 'club': 254, 'loser': 255, '5': 256, 'talk': 257, 'tiny': 258, 'question': 259, 'allowed': 260, 'rape': 261, 'actually': 262, 'barr': 263, 'fat': 264, 'paying': 265, 'jungkook': 266, 'deserves': 267, 'worry': 268, 'later': 269, 'first': 270, 'thinking': 
271, 'new': 272, 'care': 273, 'glad': 274, 'ah': 275, 'racist': 276, 'pls': 277, 'gameofthrones': 278, 'perfect': 279, 'amazing': 280, 'pay': 281, 'thank': 282, 'seen': 283, 'trying': 284, 'cool': 285, 'hard': 286, '000': 287, 'looking': 288, 'lies': 289, 'makes': 290, 'ur': 291, 'wtf': 292, 'job': 293, 'rather': 294, 'twitter': 295, 'yall': 296, 'thanks': 297, 'virgin': 298, 'sex': 299, '7': 300, 'quite': 301, 'p': 302, 'video': 303, 'bad': 304, 'close': 305, 'thick': 306, 'jobs': 307, 'total': 308, 'lie': 309, 'next': 310, 'friday': 311, 'family': 312, 'another': 313, 'hu': 314, 'shout': 315, 'wild': 316, 'imma': 317, 'terrorist': 318, 'modi': 319, 'great': 320, 'hi': 321, 'r': 322, 'truth': 323, 'planning': 324, 'every': 325, 'fuckin': 326, 'bitches': 327, 'mom': 328, 'trust': 329, 'repo': 330, 'abo': 331, 'cut': 332, 'fake': 333, 'cuz': 334, 'house': 335, 'act': 336, 'co': 337, 'national': 338, 'okay': 339, 'children': 340, 'miss': 341, 'along': 342, 'seeing': 343, 'meeting': 344, 'end': 345, 'honestly': 346, 'many': 347, 'stfu': 348, 'traitor': 349, 'update': 350, 'available': 351, 'yo': 352, 'tickets': 353, 'late': 354, 'government': 355, 'course': 356, 'smh': 357, 'crying': 358, 'defend': 359, 'clear': 360, 'skin': 361, 'happen': 362, 'beat': 363, 'fucked': 364, 'without': 365, 'win': 366, 'absolute': 367, 'summer': 368, 'dare': 369, 'ca': 370, 'ce': 371, 'congress': 372, 'comes': 373, 'snapchat': 374, 'boy': 375, 'month': 376, 'went': 377, 'made': 378, 'future': 379, 'realized': 380, 'soul': 381, 'means': 382, 'since': 383, 'real': 384, 'sir': 385, 'young': 386, 'tryna': 387, 'enjoying': 388, 'biggest': 389, 'turned': 390, 'decided': 391, 'paul': 392, 'dog': 393, 'mans': 394, 'yeah': 395, 'figure': 396, 'wrong': 397, 'thrones': 398, 'finale': 399, 'self': 400, 'shitty': 401, 'truly': 402, 'living': 403, 'names': 404, 'born': 405, 'ppl': 406, '20': 407, 'mind': 408, 'police': 409, 'jamie': 410, 'awesome': 411, 'though': 412, 'info': 413, 'tariffs': 414, 'caught': 415, 'corrupt': 416, 'sad': 417, 'despite': 418, 'fo': 419, 'bjp': 420, 'funder': 421, 'ago': 422, 'fucks': 423, 'pretty': 424, 'drama': 425, 'fight': 426, 'eat': 427, 'feeling': 428, 'hell': 429, 'try': 430, 'kill': 431, 'body': 432, 'lot': 433, 'small': 434, 'coffee': 435, 'sometimes': 436, 'friend': 437, 'teller': 438, 'neither': 439, 'messi': 440, 'friends': 441, 'goal': 442, 'obligated': 443, 'smoke': 444, 'shut': 445, 'giving': 446, 'international': 447, 'cashier': 448, 'welcome': 449, 'phone': 450, 'less': 451, 'n': 452, 'attorney': 453, 'general': 454, 'speak': 455, 'wont': 456, 'hes': 457, 'minute': 458, 'stomach': 459, 'liar': 460, 'past': 461, 'apple': 462, 'needs': 463, 'dear': 464, 'tweet': 465, 'least': 466, 'ion': 467, 'hair': 468, 'e': 469, 'offended': 470, 'kylie': 471, 'service': 472, 'l': 473, 'cutest': 474, 'thehill': 475, 'admits': 476, 'nut': 477, 'football': 478, 'used': 479, 'sehun': 480, 'moving': 481, 'nothing': 482, 'accepted': 483, 'actual': 484, 'yalls': 485, 'girls': 486, 'losing': 487, 'nice': 488, 'lame': 489, 'excuse': 490, 'upsetting': 491, 'matter': 492, 'political': 493, 'super': 494, 'bruh': 495, 'lady': 496, 'anymore': 497, 'pieces': 498, 'bought': 499, 'bar': 500, 'room': 501, 'investigation': 502, 'russian': 503, 'consensual': 504, 'guns': 505, 'bands': 506, 'groups': 507, 'delta': 508, 'tracked': 509, 'breaking': 510, 'called': 511, 'smith': 512, 'side': 513, 'diamond': 514, 'pronounce': 515, 'btches': 516, 'bag': 517, '4': 518, 'madness': 519, 'town': 520, 'tour': 521, 'retired': 
522, 'use': 523, 'treated': 524, 'pull': 525, 'justinamash': 526, 'garbage': 527, 'crew': 528, 'lindseygrahamsc': 529, 'proud': 530, 'fall': 531, 'apa': 532, 'dame': 533, 'lillard': 534, 'isnt': 535, 'pressed': 536, 'couple': 537, 'marry': 538, 'lessons': 539, 'jennie': 540, 'icon': 541, 'island': 542, 'barbershop': 543, 'hobo': 544, 'ghost': 545, 'kick': 546, 'traffic': 547, 'noel': 548, 'moron': 549, 'daughter': 550, 'cat': 551, 'countries': 552, 'came': 553, 'quick': 554, 'iam': 555, 'taken': 556, 'jail': 557, 'vs': 558, 'claim': 559, 'idk': 560, 'already': 561, 'ellen': 562, 'sh': 563, 'advice': 564, 'mr': 565, 'bada': 566, '12': 567, 'stan': 568, 'songs': 569, 'playin': 570, 'towards': 571, 'corny': 572, 'psa': 573, 'nobody': 574, 'respond': 575, 'saw': 576, 'bully': 577, 'keeping': 578, 'alone': 579, 'robe': 580, 'taking': 581, 'sound': 582, 'rap': 583, 'tea': 584, 'ente': 585, 'pissed': 586, 'throw': 587, 'asses': 588, 'suppo': 589, 'guys': 590, 'movie': 591, 'thoughts': 592, 'opinions': 593, 'fa': 594, '21': 595, 'social': 596, 'media': 597, 'hoes': 598, 'periodt': 599, 'working': 600, 'quit': 601, 'omgggg': 602, 'haha': 603, 'buddy': 604, '90': 605, 'musicals': 606, 'jin': 607, 'mouth': 608, 'touch': 609, 'hoseok': 610, '11': 611, 'seminar': 612, 'attend': 613, 'georgia': 614, 'theyre': 615, 'cute': 616, 'aoc': 617, 'child': 618, 'married': 619, 'business': 620, 'gay': 621, 'violated': 622, 'acting': 623, 'innocent': 624, 'bum': 625, 'half': 626, 'wanted': 627, 'sweater': 628, 'luck': 629, 'wants': 630, 'exo': 631, 'feed': 632, 'straight': 633, 'criminals': 634, 'hahah': 635, 'billboard': 636, 'hour': 637, 'chance': 638, 'balls': 639, 'link': 640, 'bio': 641, 'americans': 642, 'exactly': 643, 'jus': 644, 'watching': 645, 'charles': 646, 'emo': 647, 'giraffe': 648, 'puts': 649, 'extra': 650, 'watched': 651, 'single': 652, 'episode': 653, 'fit': 654, 'works': 655, 'loving': 656, 'ner': 657, 'bottom': 658, 'continue': 659, 'election': 660, 'republicans': 661, 'religion': 662, 'dick': 663, 'meanwhile': 664, 'text': 665, 'post': 666, 'woman': 667, 'porn': 668, 'bus': 669, 'turn': 670, 'asking': 671, 'trouble': 672, 'whether': 673, 'rate': 674, 'color': 675, 'humbled': 676, 'caused': 677, 'imagine': 678, 'ima': 679, 'cap': 680, 'easily': 681, 'gave': 682, 'restaurant': 683, 'smile': 684, 'daenerys': 685, 'killed': 686, 'gameofthronesfinale': 687, 'apparently': 688, 'confirmed': 689, 'add': 690, 'drunk': 691, 'blue': 692, 'sleep': 693, 'meddling': 694, 'brought': 695, 'ck': 696, 'join': 697, 'gas': 698, 'knocking': 699, 'check': 700, 'absolutely': 701, 'jojo': 702, 'telling': 703, 'follows': 704, 'kind': 705, 'months': 706, 'hear': 707, 'read': 708, 'teach': 709, 'september': 710, 'opposition': 711, 'credit': 712, 'diplomatic': 713, 'designate': 714, 'chuck': 715, 'boomer': 716, 'valverde': 717, 'middle': 718, 'library': 719, 'ugly': 720, 'seriously': 721, 'wait': 722, 'sister': 723, '14': 724, 'far': 725, 'reply': 726, 'er': 727, 'war': 728, 'machine': 729, 'depressed': 730, 'wipe': 731, 'week': 732, 'peace': 733, 'needed': 734, 'hold': 735, 'underrated': 736, 'listen': 737, 'full': 738, 'miller': 739, 'dinner': 740, 'character': 741, 'deserve': 742, 'fanfic': 743, 'previous': 744, 'raised': 745, '25': 746, 'dream': 747, 'training': 748, 'dude': 749, 'form': 750, 'butt': 751, 'prison': 752, 'food': 753, 'break': 754, 'class': 755, 'pic': 756, 'tattoos': 757, 'reps': 758, 'weird': 759, 'brain': 760, 'chill': 761, 'seagriculture': 762, 'shefvaidya': 763, 'ashamed': 764, 'lying': 765, 
'thanos': 766, 'shoot': 767, 'hoe': 768, 'jesus': 769, 'complete': 770, 'filter': 771, 'emails': 772, 'ea': 773, 'insurance': 774, 'daily': 775, 'ignore': 776, 'anything': 777, 'law': 778, 'hours': 779, 'tv': 780, 'candy': 781, 'rock': 782, 'htt': 783, 'cus': 784, 'cheat': 785, 'fetus': 786, 'coat': 787, 'justice': 788, 'cold': 789, 'pure': 790, 'power': 791, 'sea': 792, 'arya': 793, 'reason': 794, 'spend': 795, 'putting': 796, 'scroll': 797, 'scrolling': 798, 'universe': 799, 'monbebes': 800, 'broke': 801, 'head': 802, 'drop': 803, 'txt': 804, 'govt': 805, 'none': 806, 'clearly': 807, 'dey': 808, 'agree': 809, 'history': 810, 'line': 811, 'handsome': 812, 'blocking': 813, 'questions': 814, 'hearing': 815, 'millionaire': 816, 'lannister': 817, 'nastiest': 818, 'skank': 819, 'fugly': 820, 'slut': 821, 'aint': 822, 'relationship': 823, 'simple': 824, 'gesture': 825, 'ako': 826, 'daddy': 827, 'guitarmoog': 828, 'black': 829, 'pro': 830, 'return': 831, 'seem': 832, 'lindsey': 833, 'graham': 834, 'television': 835, 'hillary': 836, 'former': 837, 'word': 838, 'sumi': 839, 'judge': 840, 'crochet': 841, 'peoplesvote': 842, 'uk': 843, 'asshole': 844, 'false': 845, 'pop': 846, 'mine': 847, 'crazy': 848, 'announced': 849, 'israeli': 850, 'lost': 851, 'joe': 852, 'biden': 853, 'geaninec': 854, 'local': 855, 'vinyl': 856, 'sure': 857, 'adam': 858, 'makeup': 859, 'american': 860, 'race': 861, 'sign': 862, 'heard': 863, 'peep': 864, 'mates': 865, 'min': 866, 'lil': 867, 'fcbarcelona': 868, 'cunt': 869, 'sunnienoel': 870, 'goals': 871, 'tune': 872, 'husband': 873, '33': 874, 'john': 875, 'boutta': 876, 'ifk': 877, 'islam': 878, 'thousands': 879, 'number': 880, 'hanbin': 881, 'ikon': 882, 'twice': 883, 'yes': 884, 'dumbass': 885, 'tomorrow': 886, '0': 887, 'forget': 888, 'customer': 889, 'na': 890, 'shift': 891, 'retail': 892, 'rich': 893, 'paid': 894, 'lmaooo': 895, 'sexy': 896, 'huaisang': 897, 'slept': 898, 'mum': 899, 'master': 900, 'tough': 901, 'oss': 902, 'downy': 903, 'queen': 904, 'bo': 905, 'ga': 906, '1cr': 907, 'delmiyaa': 908, 'samini': 909, 'resetting': 910, 'greatness': 911, 'swxnsea': 912, 'divock': 913, 'origi': 914, 'loan': 915, 'offer': 916, 'nutclusteruwu': 917, 'reactions': 918, 'tom': 919, 'holland': 920, 'disneyland': 921, 'kidding': 922, 'hash': 923, 'browns': 924, 'thefrankcomin': 925, 'stoned2thabones': 926, 'nothoopoverhoes': 927, 'sammyyyk12': 928, 'ummmmm': 929, 'tedcruz': 930, 'houstonrockets': 931, 'warriors': 932, 'wins': 933, 'ted': 934, 'cruz': 935, 'gigihadid': 936, 'advocate': 937, 'figures': 938, 'bein': 939, 'boii': 940, 'mccoy': 941, 'confident': 942, 'constant': 943, 'owillis': 944, 'shapiro': 945, 'jiminslovr': 946, 'cleveland': 947, 'sammynososa': 948, 'crystal': 949, 'mitten': 950, 'illinois': 951, 'dropping': 952, 'currently': 953, 'plenty': 954, 'april47512': 955, 'wife': 956, 'deerhunter': 957, 'earl': 958, 'realslokhova': 959, 'third': 960, 'contact': 961, 'thebeardlyben': 962, 'djynnflyssa': 963, 'hang': 964, 'hyena': 965, 'happening': 966, 'sickening': 967, 'handcuffed': 968, 'aiming': 969, 'liking': 970, 'eoinhiggins': 971, 'ee': 972, 'airlines': 973, 'nightmare': 974, 'dang': 975, 'serious': 976, 'defeated': 977, 'phryxus': 978, 'iamcardib': 979, 'moms': 980, 'randomly': 981, 'babies': 982, 'lord': 983, 'edkrassen': 984, 'senator': 985, 'mazie': 986, 'hirono': 987, 'william': 988, 'attempts': 989, 'leaves': 990, 'speechless': 991, 'moodluv': 992, 'naturallin1000ki': 993, 'himordaphim': 994, 'glosssyboss': 995, 'stephen': 996, 'youre': 997, 'wrooong': 998, 
'ittt': 999, 'booplaks': 1000, 'sleeping': 1001, 'expectations': 1002, 'ffviir': 1003, 'massive': 1004, 'explorable': 1005, 'midgar': 1006, 'quests': 1007, 'ultima': 1008, 'mama2rowan': 1009, 'ugh': 1010, 'obsessed': 1011, 'moonstone': 1012, 'rings': 1013, 'spalding': 1014, 'loverr': 1015, 'recoding': 1016, 'talm': 1017, 'agitha': 1018, 'bigtha': 1019, 'dcaflight757': 1020, 'satirclalx': 1021, 'vapoliticalmeme': 1022, 'naralvirginia': 1023, 'yonnnsssbear': 1024, 'choose': 1025, 'wisely': 1026, 'gkeile': 1027, 'deficit': 1028, 'illegals': 1029, 'sending': 1030, 'jeonlvr': 1031, 'hispanic': 1032, 'cumberbatch': 1033, 'schwarzenegger': 1034, 'jimin': 1035, 'tutori': 1036, 'lucycsmith4': 1037, 'joke': 1038, 'dying': 1039, 'donnadon9': 1040, 'resign': 1041, 'sekusa1': 1042, 'tucker': 1043, 'carlson': 1044, 'ilhan': 1045, 'omar': 1046, 'failed': 1047, 'immigration': 1048, 'system': 1049, 'dislikes': 1050, 'survivethejive': 1051, 'cornwall': 1052, 'hobby': 1053, 'horses': 1054, 'signify': 1055, 'tear': 1056, 'nygovcuomo': 1057, 'trip': 1058, 'hudson': 1059, 'river': 1060, 'dismantling': 1061, 'final': 1062, 'tapp': 1063, 'cjohnsonspider': 1064, 'douche': 1065, 'karma': 1066, 'spelling': 1067, 'error': 1068, 'doesnt': 1069, 'differing': 1070, 'opinion': 1071, 'oon': 1072, 'gag': 1073, 'imjust': 1074, 'asiay': 1075, 'jakeevans283': 1076, 'sake': 1077, 'kobemays': 1078, 'notice': 1079, 'fading': 1080, 'falling': 1081, 'hesitant': 1082, 'prideful': 1083, 'reach': 1084, 'tianathefirst': 1085, 'collector': 1086, 'named': 1087, 'amash': 1088, 'dbongino': 1089, 'yeal': 1090, 'fire': 1091, 'yrs': 1092, 'finally': 1093, 'decide': 1094, 'votes': 1095, 'afrenchfli': 1096, 'jolt': 1097, 'clean': 1098, 'china': 1099, 'behavior': 1100, 'stands': 1101, 'freezing': 1102, 'tacos': 1103, 'random': 1104, 'gripped': 1105, 'nowhere': 1106, 'mfnugent': 1107, 'aetherflare': 1108, 'yikes': 1109, 'yiking': 1110, 'valarrrrrrry': 1111, 'justmwihaki': 1112, 'tayooye': 1113, 'bear': 1114, 'death': 1115, 'colleague': 1116, 'frie': 1117, 'supremeevon': 1118, 'incarceratedbob': 1119, 'justinleise': 1120, 'espn': 1121, 'ironic': 1122, 'awareness': 1123, 'realitysusu69': 1124, 'surgery': 1125, 'anesthesia': 1126, 'misses': 1127, 'itsfxrz': 1128, 'husbands': 1129, 'prolly': 1130, 'peng': 1131, 'ting': 1132, 'hunny': 1133, 'double': 1134, 'chin': 1135, 'y3': 1136, 'ukelele': 1137, 'lovemusic': 1138, 'bromleybymt': 1139, 'hpabe': 1140, 'bringinglearningtolife': 1141, 'rvinyourarea1': 1142, 'prettier': 1143, 'accent': 1144, 'faint': 1145, 'scout': 1146, 'adventures': 1147, 'aventurers': 1148, 'whe': 1149, 'bugusdiemon': 1150, 'enews': 1151, 'kuzquestionmark': 1152, 'jessphillips': 1153, 'created': 1154, 'pleasure': 1155, 'minutes': 1156, 'miles': 1157, 'repre': 1158, 'solmundisonline': 1159, 'ragtime': 1160, 'chorus': 1161, 'hauntin': 1162, 'sillyoon': 1163, 'evilmopacatx': 1164, 'austin': 1165, 'study': 1166, 'stats': 1167, '82': 1168, 'texting': 1169, 'dignityindying': 1170, 'motor': 1171, 'neurone': 1172, 'disease': 1173, 'pushing': 1174, 'huft': 1175, 'sumansh58123278': 1176, 'italian': 1177, 'looteri': 1178, 'bipolar': 1179, 'disorder': 1180, 'affected': 1181, 'insane': 1182, 'awwwwcats': 1183, 'dusty': 1184, 'concept': 1185, 'personal': 1186, 'space': 1187, 'butterflybowz3': 1188, 'rjcmxrell': 1189, 's3nsimolly': 1190, 'itsbigbrook': 1191, 'magagwen': 1192, 'forcing': 1193, 'teachings': 1194, 'customs': 1195, 'trendychriss': 1196, 'clingy': 1197, 'distant': 1198, 'juhi': 1199, 'letting': 1200, 'grab': 1201, 'copy': 1202, 
'tljluxurymag': 1203, 'edition': 1204, 'loveconweezy27': 1205, 'blocked': 1206, 'mensaintshit': 1207, 'njackerman1': 1208, 'safe': 1209, 'juicyc': 1210, 'situation': 1211, 'lifestyle': 1212, 'lowest': 1213, 'stronger': 1214, 'zones': 1215, 'soakedry': 1216, 'cheered': 1217, 'outrotear': 1218, 'joon': 1219, 'netflixfilm': 1220, 'asked': 1221, 'amy': 1222, 'poehler': 1223, 'female': 1224, 'directors': 1225, 'samuel': 1226, 'verson1': 1227, 'ccsantini': 1228, 'girlfriend': 1229, 'trinitystclair': 1230, 'shemalesurfer2': 1231, 'adultparody': 1232, 'jockosrocket': 1233, 'tgirlsaddict': 1234, 'meninaasafa': 1235, 'summoned': 1236, 'usinger': 1237, 'yelled': 1238, 'retard': 1239, 'tunnel': 1240, 'abracadabra': 1241, 'bing': 1242, 'boom': 1243, 'bigbang': 1244, 'top': 1245, 'gd': 1246, 'fairiecult': 1247, 'oomf': 1248, 'ggs': 1249, 'ghostlyminded': 1250, 'targarcyn': 1251, 'dankruptdev': 1252, 'ladies': 1253, 'wandering': 1254, 'festivals': 1255, 'group': 1256, 'edc': 1257, 'jeonggukpics': 1258, 'disturb': 1259, 'snacks': 1260, 'dance': 1261, 'polarbearyoongi': 1262, 'yoongi': 1263, 'giggles': 1264, 'anaisnaee': 1265, '70': 1266, 'viagogo': 1267, 'ambeofficial': 1268, 'age': 1269, 'experience': 1270, 'industry': 1271, 'allow': 1272, 'praise': 1273, 'sides': 1274, 'ppact': 1275, 'terrifying': 1276, 'light': 1277, 'violent': 1278, 'rhetoric': 1279, 'immigrants': 1280, 'refugees': 1281, 'fami': 1282, 'slowest': 1283, 'theatre': 1284, 'normally': 1285, 'criticism': 1286, 'ie': 1287, 'femmevillain': 1288, 'jon': 1289, 'snow': 1290, 'punk': 1291, 'catelyn': 1292, 'alonestfeels': 1293, 'around': 1294, 'wondering': 1295, 'lindsay': 1296, 'izatt': 1297, 'matthew': 1298, 'malkin': 1299, 'sydney': 1300, 'initiative': 1301, 'organizing': 1302, 'choi': 1303, 'bts2': 1304, 'lighting': 1305, 'camerawork': 1306, 'vocal': 1307, 'outfits': 1308, 'audiences': 1309, 'beating': 1310, 'bes': 1311, 'lotives': 1312, 'puiginho': 1313, 'boni59465268': 1314, 'hua': 1315, 'ful': 1316, 'amarixxah': 1317, 'clown': 1318, 'tashaahrens': 1319, 'ive': 1320, 'loved': 1321, 'kayriuh': 1322, 'ig': 1323, 'pancakes': 1324, 'puasa': 1325, 'sincerest': 1326, 'apologies': 1327, 'lobster5227': 1328, 'blonde': 1329, 'redhead': 1330, 'randpaul': 1331, 'rand': 1332, 'snake': 1333, 'cardboard': 1334, 'dropsofmauve': 1335, 'fleurdrouh': 1336, 'ye': 1337, 'bhagwan': 1338, 'hai': 1339, 'shitsfuckt': 1340, 'vision4000': 1341, 'japanese': 1342, 'wrestling': 1343, 'flnessaa': 1344, 'hulk': 1345, 'asf': 1346, 'thts': 1347, 'usual': 1348, 'seculars': 1349, 'spilleroftea': 1350, 'posted': 1351, 'unpleasant': 1352, '5000': 1353, 'landonromano': 1354, 'hating': 1355, 'distract': 1356, 'pick': 1357, 'boo': 1358, 'yoshi': 1359, 'lab863': 1360, 'cuss': 1361, 'itsgreat2000': 1362, '2000': 1363, 'titties': 1364, 'liters': 1365, 'zzolwkyyy': 1366, 'fix': 1367, 'patient': 1368, 'gabbie': 1369, 'yebbs': 1370, 'daisy': 1371, 'contrarosy': 1372, '16mil': 1373, 'ayaaannnaaa': 1374, 'jeremyfrankly': 1375, 'musical': 1376, 'theater': 1377, 'nerds': 1378, 'suspend': 1379, 'disbelief': 1380, 'defendi': 1381, 'giossmv': 1382, 'jk': 1383, 'kikiadine': 1384, 'tolerable': 1385, 'favorite': 1386, 'hamberde': 1387, 'susanhenshaw50': 1388, 'gej': 1389, 'uncompleted': 1390, 'projects': 1391, '16': 1392, 'catastrophic': 1393, 'thievery': 1394, 'jax': 1395, 'persists': 1396, 'usmcliberal': 1397, 'brokenscales': 1398, 'tr': 1399, 'realistcontent': 1400, 'write': 1401, 'message': 1402, 'halfway': 1403, 'delete': 1404, 'forward': 1405, 'unable': 1406, 'twe': 1407, 'keithawynn': 
1408, 'non': 1409, 'spoiler': 1410, 'catha': 1411, 'ic': 1412, 'poetic': 1413, 'oftentimes': 1414, 'gopchairwoman': 1415, 'towers': 1416, 'mode': 1417, 'breathtaking': 1418, 'rfl': 1419, 'sim': 1420, 'league': 1421, 'disgusted': 1422, 'governers': 1423, 'fistingraid': 1424, 'colorful': 1425, 'whimsical': 1426, 'term': 1427, 'tone': 1428, '901savageash': 1429, 'broken': 1430, 'lotta': 1431, 'forgot': 1432, 'conce': 1433, 'carleigh1985': 1434, 'ran': 1435, 'professional': 1436, 'nev': 1437, 'bucky': 1438, 'devfromdededo': 1439, 'mf': 1440, 'wack': 1441, 'males': 1442, 'peterquillsi': 1443, 'coltenpearson': 1444, 'harden': 1445, 'stans': 1446, 'emotional': 1447, 'unfollower': 1448, '50': 1449, 'followers': 1450, 'denki': 1451, 'eve': 1452, 'realtanyatay': 1453, 'soviet': 1454, 'union': 1455, 'immigrated': 1456, 'land': 1457, 'ctholla': 1458, 'outlier': 1459, 'cearastewa': 1460, '60': 1461, 'wallet': 1462, 'raniovemaii': 1463, 'comparing': 1464, 'casual': 1465, 'swimsuit': 1466, 'minor': 1467, 'literal': 1468, 'killing': 1469, 'drawn': 1470, 'angle': 1471, 'expos': 1472, 'fea': 1473, 'hemilitia': 1474, 'kinda': 1475, 'fell': 1476, 'ride': 1477, 'hopefully': 1478, 'revived': 1479, 'exunini': 1480, 'friendly': 1481, 'reminder': 1482, 'jongin': 1483, 'ls': 1484, 'peaceful': 1485, 'among': 1486, 'fandoms': 1487, 'ar': 1488, 'nothin': 1489, 'reeeeeaaaaaal': 1490, 'losputoshellacopters': 1491, 'hoy': 1492, 'en': 1493, 'madrid': 1494, 'uppittynegress': 1495, 'religions': 1496, 'specifically': 1497, 'command': 1498, 'couples': 1499, 'afford': 1500, 'rent': 1501, '850': 1502, 'justn4z': 1503, 'catalogue': 1504, 'doujima': 1505, 'booth': 1506, 'c14': 1507, 'friendos': 1508, 'weekend': 1509, 'bday': 1510, 'tat': 1511, 'maybe': 1512, 'arm': 1513, 'entire': 1514, 'loriemeacham': 1515, 'presstv': 1516, 'appointed': 1517, 'dc': 1518, 'yourdimpleisil1': 1519, 'lilshishia': 1520, 'michael': 1521, 'house9': 1522, 'mcdonald': 1523, 'hurry': 1524, 'tf': 1525, 'nudibelle': 1526, 'knighted': 1527, 'virginity': 1528, 'knight': 1529, 'zipur15': 1530, 'briansnewhea': 1531, 'lsd122070': 1532, 'sofiaa': 1533, 'gut': 1534, 'countdown': 1535, 'red': 1536, 'carpet': 1537, 'stream': 1538, 'kst': 1539, 'edt': 1540, 'pdt': 1541, 'str': 1542, 'carra23': 1543, 'weekly': 1544, 'rant': 1545, 'woodward': 1546, 'glazers': 1547, 'lianamurphy': 1548, 'socialm85897394': 1549, 'labour': 1550, 'definitely': 1551, 'democratic': 1552, 'idiots': 1553, 'generation': 1554, 'nikelondon': 1555, 'byheatherlong': 1556, 'level': 1557, 'factually': 1558, 'chi': 1559, 'stumpfo': 1560, 'rump': 1561, 'gall': 1562, 'elected': 1563, 'kshdab': 1564, 'sneaky': 1565, 'billratchet': 1566, 'youtube': 1567, 'exposing': 1568, 'james': 1569, 'wiidfeeis': 1570, 'seanmdav': 1571, 'orange': 1572, 'effo': 1573, 'oust': 1574, 'darkenedsabers': 1575, 'althur': 1576, 'helment': 1577, 'bike': 1578, 'hyped': 1579, 'midnightride21': 1580, 'quote': 1581, 'joebiden': 1582, 'deal': 1583, 'telegraph': 1584, 'claims': 1585, 'mi5': 1586, 'mi6': 1587, 'briefed': 1588, 'steeledossier': 1589, '201': 1590, 'lunch': 1591, 'sporf': 1592, 'zinedine': 1593, 'zidane': 1594, 'garethbale11': 1595, 'player': 1596, 'team': 1597, 'queenan85014220': 1598, 'kaira': 1599, 'swift00': 1600, 'jasjazzy': 1601, 'ot7': 1602, 'vkook': 1603, 'goodnight': 1604, 'laila': 1605, 'nrzalnh': 1606, 'bcs': 1607, 'aining': 1608, 'atrupar': 1609, 'bensasse': 1610, 'oleg': 1611, 'deripaska': 1612, 'feeding': 1613, 'scum': 1614, 'sucker': 1615, 'muralikrishnae1': 1616, 'amit': 1617, 'shah': 1618, 'extremely': 
1619, 'winning': 1620, 'rosemonroelive': 1621, 'scene': 1622, 'rosexmonroex': 1623, 'rosemonroe': 1624, 'bootyqueen': 1625, 'realrkofficial': 1626, 'realitykings': 1627, 'bigbooty': 1628, 'thetnholler': 1629, 'valentineshow': 1630, 'glencasada': 1631, 'jeremyfaison4tn': 1632, 'princexmfr': 1633, 'showering': 1634, 'classist': 1635, 'sjws': 1636, 'breakup': 1637, 'ariannaaa': 1638, '217': 1639, 'thug': 1640, 'purelyfootball': 1641, 'roy': 1642, 'keane': 1643, 'matthijs': 1644, 'de': 1645, 'ligt': 1646, 'captaining': 1647, 'ajax': 1648, 'earning': 1649, 'modest': 1650, 'wage': 1651, 'rashf': 1652, 'dm': 1653, 'methat': 1654, 'stinker': 1655, 'papi': 1656, 'stephencurry30': 1657, 'edited': 1658, 'reads': 1659, 'wednesday': 1660, 'pussy': 1661, 'skinny': 1662, 'naked': 1663, 'lesbians': 1664, 'fam': 1665, 'tsmayumisays': 1666, 'officer': 1667, 'crash': 1668, 'newforestsafari': 1669, 'waiting': 1670, 'throwbackthursday': 1671, 'campervan': 1672, 'campervanhire': 1673, 'vanlife': 1674, 'morse': 1675, 'lane': 1676, 'ohhhh': 1677, 'rickyberwick': 1678, 'misscocodeluxe': 1679, 'minding': 1680, 'react': 1681, 'ewitssamantha': 1682, 'alizejacquez': 1683, 'rokhanna': 1684, 'added': 1685, 'unemployment': 1686, 'rural': 1687, 'communities': 1688, 'continu': 1689, 'untitld': 1690, 'documnt': 1691, 'kicked': 1692, 'goober': 1693, 'yoongimylil': 1694, 'dust': 1695, 'scrunch': 1696, 'whyyyyyyyyyyyyyyyyy': 1697, 'boi': 1698, 'naaaav12': 1699, 'sac': 1700, 'shoulda': 1701, 'jumped': 1702, 'breex': 1703, 'omm': 1704, 'itsnicksnider': 1705, 'jackieaina': 1706, 'obviously': 1707, 'tweets': 1708, 'sent': 1709, 'deashay': 1710, 'understanding': 1711, 'nudecelebsnude': 1712, 'gwyneth': 1713, 'paltrow': 1714, 'offering': 1715, 'tit': 1716, 'gown': 1717, 'hazulezah': 1718, 'push': 1719, 'walk': 1720, 'knowing': 1721, 'myboy': 1722, 'sgrstk': 1723, 'treat': 1724, 'staff': 1725, 'ridiculous': 1726, 'chose': 1727, 'callie': 1728, 'ainly': 1729, 'gir': 1730, 'theblackercaleb': 1731, 'cersei': 1732, 'memeber': 1733, 'pinkish666': 1734, 'ryaanngfield': 1735, 'liyummm': 1736, 'jawline': 1737, 'lifeasaswiftie': 1738, 'ts7': 1739, 'promo': 1740, 'list': 1741, 'language': 1742, 'chuckgrassley': 1743, 'speaking': 1744, 'english': 1745, 'raisshion': 1746, 'andrew1albe': 1747, 'veterinarian': 1748, 'comfo': 1749, 'assistant': 1750, 'helps': 1751, 'patients': 1752, 'alright': 1753, 'koojjunies': 1754, 'bb': 1755, 'easier': 1756, 'ian': 1757, 'ochii': 1758, 'keyboysteve': 1759, 'appreciate': 1760, 'navy': 1761, 'guruanaerobic': 1762, '58yrs': 1763, 'regard': 1764, 'chronic': 1765, 'lack': 1766, 'faulty': 1767, 'chrismegerian': 1768, 'memorable': 1769, 'kimmylou7': 1770, 'lifetothemax1': 1771, 'princessbravato': 1772, 'interservele': 1773, 'based': 1774, 'bamburgh': 1775, 'blyth': 1776, 'btearlycareers': 1777, 'newcastle': 1778, 'kic': 1779, 'robb': 1780, 'exulting': 1781, 'itsokdontbesad': 1782, 'spotify': 1783, 'playlist': 1784, 'turning': 1785, 'exes': 1786, 'stuck': 1787, 'rekindling': 1788, 'aboutmrdarcy': 1789, 'gilmore': 1790, 'gilmoregirls': 1791, 'beef': 1792, 'academic': 1793, 'incorrect': 1794, 'motivation': 1795, 'tips': 1796, 'refs': 1797, 'chant': 1798, 'original': 1799, 'rainheatherr': 1800, 'hottt': 1801, 'massssmish': 1802, 'yuck': 1803, 'mas': 1804, 'stfutony': 1805, 'prices': 1806, 'assume': 1807, 'hoegenic': 1808, 'breakdown': 1809, 'parent': 1810, 'suddenly': 1811, 'ctravi': 1812, 'demonetization': 1813, 'activities': 1814, 'maoits': 1815, 'answer': 1816, 'yesmeredithfinn': 1817, 'express': 1818, 'racism': 1819, 
'bigotry': 1820, 'jimintical': 1821, 'closeup': 1822, 'puffy': 1823, 'eyes': 1824, 'nose': 1825, 'lips': 1826, 'doll': 1827, 'fedporn': 1828, 'priest': 1829, 'sermon': 1830, 'tied': 1831, 'brand': 1832, 'slogans': 1833, 'preachings': 1834, 'runs': 1835, 'dunkin': 1836, 'madddieee217': 1837, 'thats': 1838, 'americas': 1839, 'cal': 1840, 'muscle': 1841, 'daddies': 1842, 'hottie': 1843, 'pisses': 1844, 'deep': 1845, 'owenhawkxxx': 1846, 'gotfinale': 1847, 'milkygoddess': 1848, 'drug': 1849, 'compare': 1850, 'spending': 1851, 'useless': 1852, 'bomsmaid': 1853, 'vagina': 1854, 'adventure': 1855, 'tru': 1856, 'knjrklves': 1857, 'mystic': 1858, 'messenger': 1859, 'namjoon': 1860, 'locked': 1861, 'hanging': 1862, 'dumping': 1863, 'dtf': 1864, 'sprint': 1865, 'onto': 1866, 'highway': 1867, 'themomunleashed': 1868, 'worried': 1869, 'wondered': 1870, 'track': 1871, 'development': 1872, 'concer': 1873, 'hahabeej': 1874, 'feelings': 1875, 'sideways': 1876, 'dedamola': 1877, 'thepamilerin': 1878, 'izreil': 1879, 'yummy': 1880, 'namjoonie': 1881, 'oftrump': 1882, 'jennieregul': 1883, 'checking': 1884, 'thot': 1885, 'bloomslvan': 1886, 'realizing': 1887, 'boyfriend': 1888, 'mochi': 1889, 'jimin80': 1890, 'bgt': 1891, 'rxii': 1892, 'families': 1893, 'length': 1894, 'protect': 1895, 'mtracey': 1896, 'mikoosmos': 1897, 'understand': 1898, 'nails': 1899, 'une': 1900, 'ommarif': 1901, 'maria': 1902, 'delrusso': 1903, 'dating': 1904, 'received': 1905, 'con': 1906, 'davidfrawleyved': 1907, 'led': 1908, 'refuses': 1909, 'narendra': 1910, 'victory': 1911, 'un': 1912, 'sailormooncrys9': 1913, 'stowed': 1914, 'wentto': 1915, 'bed': 1916, 'altogetherhappy': 1917, 'ab84': 1918, 'trailblazers': 1919, 'schaheid': 1920, 'basically': 1921, 'dismissing': 1922, 'pakistan': 1923, 'valid': 1924, 'concerns': 1925, 'hegal': 1926, 'reflect': 1927, 'negat': 1928, 'indeed': 1929, 'empower': 1930, 'technologists': 1931, 'cyber': 1932, 'cybersecurity': 1933, 'felonies': 1934, 'malbonhumora': 1935, 'fuckbamboni': 1936, 'iicedtae': 1937, 'matt': 1938, 'tall': 1939, 'fluffy': 1940, 'chihuahua': 1941, 'footballfunnnys': 1942, 'alena': 1943, 'abt': 1944, 'annoying': 1945, 'schoolers': 1946, 'detective': 1947, 'pikachu': 1948, 'looks': 1949, 'ratedls': 1950, 'doggintrump': 1951, 'texas': 1952, 'greasy': 1953, 'beto': 1954, 'iamthecreatress': 1955, 'spoil': 1956, 'riches': 1957, 'ethiopia': 1958, 'hondadeal4vets': 1959, 'halsey': 1960, 'yh': 1961, 'heads': 1962, '0ga': 1963, 'ist': 1964, 'crack': 1965, 'animatrocities': 1966, 'dammmmn': 1967, 'therealpbarry': 1968, 'tax': 1969, 'loopholes': 1970, 'totally': 1971, 'exploited': 1972, 'mollajoon': 1973, 'taehyung': 1974, 'bias': 1975, 'wrecker': 1976, 'stinkyca': 1977, 'softest': 1978, 'uncle': 1979, 'morgan': 1980, 'stories': 1981, 'odairannies': 1982, 'winterfell': 1983, 'burned': 1984, 'cloutjefe': 1985, 'erinmhk': 1986, 'lexi': 1987, 'moh': 1988, 'kohn': 1989, 'daviegreig': 1990, 'iamkp': 1991, 'occasionally': 1992, 'python': 1993, 'sketch': 1994, 'chrismurphyct': 1995, 'shining': 1996, 'spotlight': 1997, 'anxiety': 1998, 'tears': 1999, 'mrmichaelwaxman': 2000, 'thekidmero': 2001, 'promote': 2002, 'bumped': 2003, 'episodes': 2004, 'sa': 2005, 'jamesa': 2006, 'hur23': 2007, 'cooler': 2008, 'cult': 2009, 'taught': 2010, 'decieve': 2011, 'empressfindom': 2012, 'findom': 2013, 'tributes': 2014, 'carlosr1110': 2015, 'muchachomckay': 2016, 'carlossss': 2017, 'chamber45': 2018, 'moneys': 2019, 'object': 2020, 'investing': 2021, 'properly': 2022, 'wasting': 2023, 'sumfin': 2024, 'il': 2025, 
'jessbelll1': 2026, 'embarrassing': 2027, 'embarrassed': 2028, 'yerrrddd3000': 2029, 'stiffest': 2030, 'uppercut': 2031, 'poemsbycheyenne': 2032, 'wishing': 2033, 'fabulous': 2034, 'pool': 2035, 'carrie': 2036, 'turnt': 2037, 'bowman': 2038, 'purgatorie': 2039, 'petrinajc': 2040, 'lapublichealth': 2041, 'border': 2042, 'hospital': 2043, 'invest': 2044, 'expect': 2045, 'private': 2046, 'pics': 2047, 'nzwaaft01': 2048, 'taehyungyouareperfect': 2049, 'taehyungweloveyou': 2050, 'jen': 2051, 'jennnnnnnn': 2052, 'jake': 2053, 'hatred': 2054, 'cody': 2055, 'ko': 2056, 'technically': 2057, 'deck': 2058, 'cos': 2059, 'kmgthot': 2060, 'seven': 2061, 'bbclips': 2062, 'cardi': 2063, 'commented': 2064, 'truer': 2065, 'uttered': 2066, 'gop': 2067, '200': 2068, 'billion': 2069, 'chinese': 2070, 'goods': 2071, 'pr': 2072, 'elonmusk': 2073, 'sam': 2074, 'commercial': 2075, 'pilot': 2076, 'however': 2077, 'fees': 2078, '70000': 2079, 'alyiahxoxo': 2080, 'dalkomhanuwu': 2081, 'tq': 2082, 'soft': 2083, 'shownu': 2084, 'pc': 2085, 'selling': 2086, 'fansuppo': 2087, 'hpf': 2088, 'reservation': 2089, 'itshu': 2090, 's1': 2091, 'jugglinjosh': 2092, 'whooping': 2093, 'todorokispider': 2094, 'kaliea': 2095, 'xoxo': 2096, 'driver': 2097, 'raped': 2098, 'fluid': 2099, 'samples': 2100, 'evenings': 2101, 'dearmetenyearsago': 2102, 'charliekirk11': 2103, 'donald': 2104, 'moral': 2105, 'attacked': 2106, 'giovannnnnna': 2107, 'reunion': 2108, 'shopping': 2109, 'graduating': 2110, 'danandshay': 2111, 'thelifeoflane': 2112, 'plain': 2113, 'ol': 2114, 'silly': 2115, 'ab': 2116, 'bautista34': 2117, 'blunts': 2118, 'pradakookie': 2119, 'istg': 2120, 'playboy': 2121, 'realsaavedra': 2122, 'stacey': 2123, 'abrams': 2124, 'warns': 2125, 'takeover': 2126, '2030': 2127, 'voter': 2128, 'suppression': 2129, 'addressed': 2130, 'v': 2131, 'cologne': 2132, 'workout': 2133, 'plank': 2134, 'seconds': 2135, 'glute': 2136, 'bridges': 2137, 'jump': 2138, 'squats': 2139, 'kegels': 2140, 'maijakoko': 2141, 'smooth': 2142, 'legend': 2143, 'btsatmetlife': 2144, 'joy': 2145, 'tracking': 2146, 'calms': 2147, 'fuzzy': 2148, 'seaweed': 2149, 'conference': 2150, 'early': 2151, 'bird': 2152, 'register': 2153, 'aubreymaynard': 2154, 'headlights': 2155, 'stonekettle': 2156, 'military': 2157, 'appreciated': 2158, 'enough': 2159, 'shame': 2160, 'mi': 2161, 'humans': 2162, 'weirdos': 2163, 'kathleenannn': 2164, '2am': 2165, 'corybarlog': 2166, 'pfff': 2167, 'kojima': 2168, 'gstandsforgay': 2169, 'fails': 2170, 'misogyny': 2171, 'atishiaap': 2172, 'respectmyelders': 2173, 'corvus': 2174, 'glave': 2175, 'troops': 2176, 'neeeoooowww': 2177, 'n8vegrl4life': 2178, 'lightningcreeke': 2179, 'codeofvets': 2180, 'divine': 2181, 'intervention': 2182, 'healing': 2183, 'necessary3vol': 2184, 'nighas': 2185, 'etnow': 2186, 'sofiacarson': 2187, 'gettyimages': 2188, 'rude': 2189, 'prsvelvet': 2190, 'immediately': 2191, 'easpo': 2192, 'sfifa': 2193, 'tech': 2194, 'aft': 2195, 'bust': 2196, 'expensive': 2197, 'latte': 2198, 'switch': 2199, 'drip': 2200, 'smashley333333': 2201, 'realest': 2202, 'dyyying': 2203, 'imagined': 2204, 'listening': 2205, 'wholefoods': 2206, 'greene': 2207, 'brooklyn': 2208, 'location': 2209, 'checkout': 2210, 'yell': 2211, 'nanaafiq': 2212, 'trashprincessx': 2213, 'grandma': 2214, 'x': 2215, 'arizona': 2216, 'says': 2217, 'toddler': 2218, 'bitten': 2219, 'daycare': 2220, 'danrather': 2221, 'function': 2222, 'nation': 2223, 'mueller': 2224, 'letter': 2225, 'abbiewhiteex': 2226, 'creasing': 2227, 'hahaha': 2228, 'gal': 2229, 'roleplayers': 2230, 
'groove': 2231, 'stillblazingtho': 2232, 'weed': 2233, 'assumptions': 2234, 'satisfied': 2235, 'ended': 2236, 'mirkw00dtauriel': 2237, 'tauriel': 2238, 'sagarikaghose': 2239, 'manowdino': 2240, 'incidence': 2241, 'lollllllllll': 2242, 'manager': 2243, 'warn': 2244, 'purelysteve': 2245, 'sif': 2246, 'asgardian': 2247, 'warrior': 2248, 'true': 2249, 'expe': 2250, 'combat': 2251, 'weaponry': 2252, 'kicks': 2253, 'barajasmehjenni': 2254, '18cho2i': 2255, 'korean': 2256, 'drinkin': 2257, 'sux': 2258, 'litterally': 2259, 'adammcguckin13': 2260, 'fella': 2261, 'dump': 2262, 'presumably': 2263, 'bluejaysdad': 2264, 'bluejays': 2265, 'rotation': 2266, 'smoking': 2267, 'aggro': 2268, 'edible': 2269, 'hateful': 2270, 'incorrectmarvel': 2271, 'hey': 2272, 'prejudice': 2273, 'lgbtq': 2274, 'dannydevlthoe': 2275, 'pov': 2276, 'topshop': 2277, 'hanger': 2278, 'hydsoengnx': 2279, 'pants': 2280, 'mstchy': 2281, 'hsi': 2282, 'sjity': 2283, 'km': 2284, 'cryigj': 2285, 'cugre': 2286, 'proudresister': 2287, 'obstruct': 2288, 'basis': 2289, 'lazyhooves': 2290, 'woke': 2291, 'sweat': 2292, 'furiously': 2293, 'shaking': 2294, 'istic': 2295, 'draw': 2296, 'cryptic': 2297, 'chileiimon': 2298, 'weeks': 2299, 'monster': 2300, 'daniel': 2301, 'rowan': 2302, 'champs': 2303, 'foxspo': 2304, 'sdet': 2305, 'philly': 2306, 'loyal': 2307, 'pair': 2308, 'offence': 2309, 'follow': 2310, 'corbynbesson': 2311, 'nowthisnews': 2312, 'voice': 2313, 'everywhereist': 2314, 'vasectomies': 2315, 'reversible': 2316, 'snytv': 2317, 'defenseman': 2318, 'puck': 2319, 'madam': 2320, 'rocileuchuk': 2321, 'retweet': 2322, 'idaho': 2323, 'gorgeous': 2324, 'kogebeolfsky': 2325, 'uhuh': 2326, 'squirm': 2327, 'inside': 2328, 'cladeek': 2329, 'riversidecrew': 2330, 'trippypip': 2331, 'blitz': 2332, 'clue': 2333, 'lifeofjay98': 2334, 'sahaelmxmbb': 2335, 'changkyun': 2336, 'minhyuk': 2337, 'raincoats': 2338, 'vip': 2339, 'mia': 2340, 'nicfromthe252': 2341, 'scary': 2342, 'bug': 2343, 'helping': 2344, 'mo': 2345, 'nyaaaajah': 2346, 'college': 2347, 'student': 2348, 'blessings': 2349, 'queendreaa': 2350, 'dents': 2351, 'youngboy': 2352, 'outside': 2353, 'delusional': 2354, 'sgsldrama': 2355, 'stinkin': 2356, 'jatarou': 2357, 'lovin': 2358, 'believe': 2359, 'prom': 2360, '10th': 2361, 'picture': 2362, 'gallery': 2363, 'date': 2364, 'recent': 2365, 'meme': 2366, 'saved': 2367, 'reacti': 2368, 'lilnasx': 2369, 'soo': 2370, 'thankyou': 2371, 'cagting': 2372, 'tlkin': 2373, 'hesitation': 2374, 'cllr': 2375, 'jefferys': 2376, 'ring': 2377, 'conservatives': 2378, 'newmarket': 2379, 'council': 2380, 'committees': 2381, '01284': 2382, '727412': 2383, 'joebuddenfit': 2384, 'ahhh': 2385, 'hands': 2386, 'joebudden': 2387, 'txtonnews': 2388, 'lookin': 2389, 'members': 2390, 'bighit': 2391, 'streaming': 2392, 'bloons': 2393, 'tower': 2394, 'defense': 2395, 'ish': 2396, 'bhandari': 2397, 'masood': 2398, 'azhar': 2399, 'designated': 2400, 'second': 2401, 'fi': 2402, 'mrjoncryer': 2403, 'anybody': 2404, 'jacob': 2405, 'wohl': 2406, 'traveling': 2407, 'pulling': 2408, 'aidenwolfe': 2409, 'cabinet': 2410, 'unfolding': 2411, 'orgy': 2412, 'debasement': 2413, 'icip': 2414, 'obnoxious': 2415, 'narcissism': 2416, 'popi': 2417, 'jo': 2418, 'sailorheck': 2419, 'wicked': 2420, 'disgraced': 2421, 'fraud': 2422, 'ebenkb': 2423, 'chat': 2424, 'order': 2425, 'enjoyable': 2426, 'botfly': 2427, 'rem': 2428, 'jakecardiff401': 2429, 'wonder': 2430, 'wuv': 2431, 'damdaminnn': 2432, 'argue': 2433, 'ju': 2434, 'jidsv': 2435, 'bill': 2436, 'fr': 2437, 'alexandergold': 2438, 'cancel': 
2439, 'taylorswift13': 2440, 'ashleyxholcomb': 2441, 'wealthy': 2442, 'sideline': 2443, 'morrowkaybree': 2444, 'senatemajldr': 2445, 'seanhannity': 2446, 'omfg': 2447, 'politician': 2448, 'parkimins': 2449, 'rebels': 2450, 'piercings': 2451, 'shaves': 2452, 'whatever': 2453, 'citizentvkenya': 2454, 'po': 2455, 'santo': 2456, 'announces': 2457, '30th': 2458, 'lovelies': 2459, 'crowd': 2460, 'excited': 2461, 'perf': 2462, 'research': 2463, 'exozerofour': 2464, 'knees': 2465, 'scynewaive': 2466, 'fired': 2467, 'kid': 2468, 'livid': 2469, 'opening': 2470, 'seems': 2471, 'counter': 2472, 'intuitive': 2473, 'temyiaaa': 2474, 'irritated': 2475, 'snowden': 2476, 'venezuela': 2477, 'leader': 2478, 'launched': 2479, 'coup': 2480, 'access': 2481, 'soci': 2482, 'brifothergill': 2483, 'tin': 2484, 'hats': 2485, 'ann': 2486, 'brussels': 2487, '29': 2488, 'march': 2489, 'emeraldrobinson': 2490, 'logan': 2491, 'dangerous': 2492, 'ff7e30': 2493, 'homies': 2494, 'mwasa': 2495, 'ingabireim': 2496, 'minyouthrwanda': 2497, 'gasabo': 2498, 'district': 2499, 'rwandapolice': 2500, 'nontoxiccat': 2501, 'enjoy': 2502, 'chairmanship': 2503, 'blowing': 2504, 'amstilla': 2505, 'bossassrn': 2506, 'except': 2507, 'famous': 2508, 'mak': 2509, 'marranaacano': 2510, 'loonaonethird': 2511, 'supposed': 2512, 'jennithorburn': 2513, 'privilege': 2514, 'eurovision': 2515, 'palestinian': 2516, 'emboldening': 2517, 'heid': 2518, 'regime': 2519, 'quetzalyzenil': 2520, 'saving': 2521, 'environment': 2522, 'omw': 2523, 'gtconway3d': 2524, 'ianbassin': 2525, 'cowards': 2526, 'unwilling': 2527, 'oath': 2528, 'kolokomiks': 2529, 'pursue': 2530, 'criteria': 2531, 'dems': 2532, 'leftovergop': 2533, 'shananannomination': 2534, 'garrettxgarland': 2535, '30': 2536, 'max': 2537, 'inhabitable': 2538, 'achosenaquarius': 2539, 'aquarius': 2540, 'pero': 2541, 'gago': 2542, 'kinikilig': 2543, 'buhari': 2544, 'ayo': 2545, 'adebanjo': 2546, 'dares': 2547, 'fg': 2548, 'ep': 2549, 'introkths': 2550, 'dreamies': 2551, 'scalp': 2552, 'nataliesurely': 2553, 'researching': 2554, 'panthers': 2555, 'breakfast': 2556, 'program': 2557, 'kayluhdior': 2558, 'license': 2559, 'registration': 2560, 'papers': 2561, 'cop': 2562, 'broad': 2563, 'daylight': 2564, 'adultswim': 2565, 'jellies': 2566, 'midnight': 2567, 'tyle': 2568, 'hecreator': 2569, 'themustacheman': 2570, 'iamcarljones': 2571, 'sighbrattt': 2572, 'hilarious': 2573, 'berube': 2574, 'yeo': 2575, 'lines': 2576, 'mid': 2577, 'december': 2578, 'blues': 2579, 'passable': 2580, 'goaltending': 2581, 'blindfold': 2582, 'levy': 2583, 'memo': 2584, 'pochettino': 2585, 'problem': 2586, 'asianaidan': 2587, 'wasted': 2588, 'oppo': 2589, 'unity': 2590, 'el': 2591, 'branbran': 2592, 'sober': 2593, 'doctor': 2594, 'love11': 2595, 'attitude': 2596, 'nw1245': 2597, 'tommy': 2598, 'bullying': 2599, 'swearing': 2600, 'palmerrepo': 2601, 'deranged': 2602, 'meltdown': 2603, 'shadow': 2604, 'detrimental': 2605, 'numb': 2606, 'struck': 2607, 'nnutaella': 2608, 'ists': 2609, 'modooborahae': 2610, 'veteran': 2611, 'actress': 2612, 'cook': 2613, 'dishes': 2614, 'album': 2615, 'ohhh': 2616, 'yeahhhh': 2617, 'jayyell97': 2618, 'buy': 2619, 'plan': 2620, 'fauxmrjones': 2621, 'tm': 2622, 'foxx': 2623, 'jussie': 2624, 'smollett': 2625, 'stepped': 2626, 'aside': 2627, 'turns': 2628, 'ki': 2629, 'gga': 2630, 'tch': 2631, 'bishoppitts': 2632, 'often': 2633, 'unknownwriterm': 2634, 'xd': 2635, 'kyys4onvootwithpaniasmanan': 2636, 'asap': 2637, 'justvoot': 2638, 'bbcstudio': 2639, 'youve': 2640, 'weakest': 2641, 'human': 2642, 'id': 2643, 
'intranet': 2644, 'software': 2645, 'ability': 2646, 'learn': 2647, 'visit': 2648, 'website': 2649, 'twenties': 2650, 'ny': 2651, 'flip': 2652, 'beside': 2653, 'handmadehour': 2654, 'withdrawal': 2655, 'coastal': 2656, 'seaside': 2657, 'uklabour': 2658, 'thesnp': 2659, 'plaid': 2660, 'cymru': 2661, 'millions': 2662, 'idc': 2663, 'marklevinshow': 2664, 'obama': 2665, 'spied': 2666, 'derangeddonald': 2667, 'relijoon': 2668, 'kings': 2669, 'korea': 2670, 'sold': 2671, 'stadiums': 2672, 'multiple': 2673, 'robzombie': 2674, 'remake': 2675, 'corn': 2676, 'yoonminfocus': 2677, 'signatures': 2678, 'heres': 2679, 'bacibobo1919': 2680, 'itsmikebivins': 2681, 'mrandyngo': 2682, 'easy': 2683, 'cou': 2684, 'shoulder': 2685, 'belt': 2686, 'uchihando': 2687, 'envy': 2688, 'muah': 2689, 'tyrone345345': 2690, 'airbnb': 2691, 'equality': 2692, 'delisting': 2693, 'illegal': 2694, 'settlement': 2695, 'rentals': 2696, 'hav': 2697, 'hoarsewisperer': 2698, 'gutted': 2699, 'bass': 2700, 'ex': 2701, 'kiki1ner': 2702, 'ewarren': 2703, 'suppress': 2704, 'itsmanjobruh': 2705, 'pause': 2706, 'catturd2': 2707, 'wash': 2708, 'examiner': 2709, 'jan': 2710, 'hunter': 2711, 'counseled': 2712, 'pu': 2713, 'demanded': 2714, 'upa': 2715, 'chairperson': 2716, 'soniagandhi': 2717, 'rahulgandhi': 2718, 'apologise': 2719, 'nati': 2720, 'zindagiazabhai': 2721, 'hired': 2722, 'compensate': 2723, 'dehaan': 2724, 'rabrav': 2725, 'skip': 2726, 'natural': 2727, 'selection': 2728, 'force': 2729, 'evening': 2730, 'fly': 2731, 'airpo': 2732, 'howardfineman': 2733, 'beltway': 2734, 'corrupted': 2735, 'proximity': 2736, 'farmhouse': 2737, 'style': 2738, 'ids': 2739, 'barmaid': 2740, 'pub': 2741, 'nipple': 2742, 'pierced': 2743, 'da': 2744, 'sooooooo': 2745, 'kandsltd': 2746, 'kris': 2747, 'stronge': 2748, 'lp': 2749, 'pockets': 2750, 'bowie': 2751, 'nirvana': 2752, 'recordcoat': 2753, 'markjsaltzman': 2754, 'ernstroets': 2755, 'mark': 2756, 'knowwhateyememe': 2757, 'farm': 2758, 'trumps': 2759, 'bryanasalaz': 2760, 'consent': 2761, 'ifs': 2762, 'ands': 2763, 'butts': 2764, 'coconuts': 2765, 'minded': 2766, 'ceilings': 2767, 'syon': 2768, 'west': 2769, 'london': 2770, 'bits': 2771, 'kyliejenner': 2772, 'cking': 2773, 'skincare': 2774, 'dreamt': 2775, 'soon': 2776, 'train': 2777, 'hottest': 2778, 'market': 2779, 'administration': 2780, 'tribelaw': 2781, 'hless': 2782, 'witness': 2783, 'subpoena': 2784, 'contempt': 2785, 'dretiquette': 2786, 'outlandish': 2787, 'testimony': 2788, 'aafia': 2789, 'screams': 2790, 'louder': 2791, 'baesicsarai': 2792, 'seniors': 2793, 'schools': 2794, 'grown': 2795, 'bunch': 2796, 'freshman': 2797, 'nuke': 2798, 'skill': 2799, 'peoplr': 2800, 'phat': 2801, 'phridays': 2802, 'poppin': 2803, 'wvjoe911': 2804, 'furious': 2805, 'silent': 2806, 'sleeper': 2807, 'daft': 2808, 'criticising': 2809, 'branding': 2810, 'believes': 2811, 'isr': 2812, 'tryin': 2813, 'motherofbraylon': 2814, 'bigashley': 2815, 'doctormayor': 2816, 'proposal': 2817, 'approved': 2818, 'trixbr': 2819, 'tz': 2820, 'bennn3': 2821, 'resumed': 2822, 'leave': 2823, 'stuffs': 2824, 'impressio': 2825, 'clothes': 2826, 'mindedx': 2827, 'perfectly': 2828, 'toxic': 2829, 'vibe': 2830, 'misunderstood': 2831, 'bookish': 2832, 'wiccan': 2833, 'pleasures': 2834, 'pove': 2835, 'poor': 2836, 'iced': 2837, 'mayowa': 2838, 'gregkay': 2839, 'goes': 2840, 'akin': 2841, 'agboola': 2842, 'posing': 2843, 'boot': 2844, 'discuss': 2845, 'jackposobiec': 2846, 'formally': 2847, 'muslim': 2848, 'brotherhood': 2849, 'organization': 2850, 'faipdeooiad': 2851, 'greggsofficial': 
2852, 'm': 2853, 'freezer': 2854, 'fridge': 2855, 'iceland': 2856, 'jonghohugs': 2857, 'oomfs': 2858, 'minecraft': 2859, 'cozynanz': 2860, 'attached': 2861, 'rickygervais': 2862, 'lovely': 2863, 'pot': 2864, 'water': 2865, 'boiling': 2866, 'utdalii': 2867, 'answers': 2868, 'ali': 2869, 'jiminiecricketh': 2870, 'microphone': 2871, 'facetime': 2872, 'audio': 2873, 'rockets': 2874, 'newyork': 2875, 'minutee': 2876, 'nah': 2877, 'djyeo': 2878, 'facejacker': 2879, 'terrytibbs': 2880, 'kayvannovak': 2881, 'channel4': 2882, 'e4tweets': 2883, 'hattrickprod': 2884, 'milk': 2885, 'alpen': 2886, 'jonas': 2887, 'brothers': 2888, 'flew': 2889, 'paulgowdy2014': 2890, 'paultyredagh81': 2891, 'mjnokane': 2892, 'terrorism': 2893, 'gaza': 2894, 'deluded': 2895, 'jenxo': 2896, 'fresh': 2897, 'meal': 2898, 'fuckkkkkkkk': 2899, 'rip': 2900, 'alexander': 2901, 'chloe': 2902, 'cherry': 2903, 'grade': 2904, 'monsters': 2905, 'cock': 2906, 'chloecherryxxx': 2907, 'bangbrosdo': 2908, 'brag': 2909, 'focus': 2910, 'bxngtxn': 2911, 'bunny': 2912, 'trumpdumpcare': 2913, 'pence': 2914, 'removed': 2915, 'russiancollusion': 2916, 'electricalmama': 2917, 'prisonplanet': 2918, 'nitomatta': 2919, 'upgrade': 2920, 'adamlambe': 2921, 'americanidol': 2922, 'grand': 2923, '8p': 2924, '5e': 2925, 'abc': 2926, 'performing': 2927, 'neweyes': 2928, 'firs': 2929, 'espnmma': 2930, 'ufc': 2931, 'lifetime': 2932, 'tochmarc': 2933, 'eimear': 2934, 'reveal': 2935, 'explain': 2936, 'everybody': 2937, 'pettavelan10': 2938, 'jakki': 2939, 'nair': 2940, 'song': 2941, 'fom': 2942, 'uck': 2943, 'buddies': 2944, 'webseries': 2945, 'samoogavirodhi': 2946, 'surya': 2947, 'kumar777': 2948, 'trisha': 2949, 'adimai': 2950, 'trishamaami': 2951, 'tamilactre': 2952, 'gosh': 2953, 'bloggerswanted': 2954, 'uselesstat': 2955, 'faces': 2956, 'jerry': 2957, 'nadler': 2958, 'crooked': 2959, 'deleted': 2960, 'acid': 2961, 'washed': 2962, 'govmikehuckabee': 2963, 'consult': 2964, 'poll': 2965, '81': 2966, 'quandon': 2967, 'gon': 2968, 'clarencenyc': 2969, 'styled': 2970, 'stylist': 2971, 'axdryan': 2972, 'played': 2973, 'karmas': 2974, 'mineyricebox': 2975, 'cheriedeville': 2976, 'harlots': 2977, 'guidance': 2978, 'slide': 2979, 'dms': 2980, 'jimwhitegnv': 2981, 'ball': 2982, 'mascot': 2983, 'swamp': 2984, 'ocean': 2985, 'therickydavila': 2986, 'predator': 2987, 'threatening': 2988, 'abuse': 2989, 'foes': 2990, 'kerry': 2991, 'bid': 2992, 'voted': 2993, 'awards': 2994, 'ways': 2995, 'blessed': 2996, 'crisp': 2997, 'hairline': 2998, 'mytoecold': 2999, 'picked': 3000, 'ate': 3001, 'hamburgers': 3002, 'paragraph': 3003, 'ohhhdesss': 3004, '5wks': 3005, 'idea': 3006, 'kc593': 3007, 'rosedc11': 3008, 'therosegardenh1': 3009, 'genflynn': 3010, 'soldiers': 3011, 'sailors': 3012, 'airmen': 3013, 'marines': 3014, 'maga4justice': 3015, 'playing': 3016, 'disagree': 3017, 'jinjoonies': 3018, 'period': 3019, 'millennial': 3020, 'involved': 3021, 'governments': 3022, 'especially': 3023, 'republican': 3024, 'states': 3025, 'emos': 3026, 'edgy': 3027, 'yoshiko': 3028, 'camera': 3029, 'public': 3030, 'view': 3031, 'democrat': 3032, 'scientistamaggs': 3033, 'notwoofers': 3034, 'clarify': 3035, 'sources': 3036, 'consist': 3037, 'excitable': 3038, 'iraqi': 3039, 'italy': 3040, 'devondaigle9': 3041, 'incorrectjeon': 3042, 'friendship': 3043, 'bbmastopsosial': 3044, 'ricetactician': 3045, 'handzum1': 3046, 'music4': 3047, 'chevelleinatl': 3048, 'near': 3049, 'amf': 3050, 'cristiano': 3051, 'ronaldo': 3052, '801': 3053, '600': 3054, 'assists': 3055, '199': 3056, 'lionel': 3057, 
'hermoncher': 3058, 'patriarch': 3059, 'addams': 3060, 'clan': 3061, 'eccentric': 3062, 'multi': 3063, 'devoted': 3064, 'latin': 3065, 'kata': 3066, 'championships': 3067, 'netherlands': 3068, 'ifknetherland': 3069, 'ifkkyokushin': 3070, 'ttcdesign': 3071, 'agreeing': 3072, 'impressive': 3073, 'lets': 3074, 'endgamebarnes': 3075, 'noise': 3076, 'bruce': 3077, 'died': 3078, 'rogers87441562': 3079, 'blumenthal': 3080, 'murphy': 3081, 'forcefully': 3082, 'committed': 3083, 'iol': 3084, 'cvh': 3085, 'sociopathy': 3086, 'http': 3087, 'kamalaharris': 3088, 'screws': 3089, 'stooge': 3090, 'timseekstruth': 3091, 'viewing': 3092, 'remains': 3093, 'doubt': 3094, 'seeks': 3095, 'domination': 3096, 'jihad': 3097, 'lexa': 3098, 'merica': 3099, 'storm': 3100, 'bigger': 3101, 'races': 3102, 'sma': 3103, 'madz': 3104, 'gemnaj415': 3105, 'planet': 3106, 'drshwetagulati': 3107, 'javedakhtar': 3108, 'happily': 3109, 'promoted': 3110, 'tipu': 3111, 'sultan': 3112, 'mercilessly': 3113, 'murdered': 3114, 'garylineker': 3115, 'scores': 3116, '600th': 3117, 'freekick': 3118, 'genius': 3119, 'jakeandamir': 3120, 'maiden': 3121, 'hyphen': 3122, 'gmail': 3123, 'password': 3124, 'jr': 3125, 'urs': 3126, 'dotdaebi': 3127, 'hq': 3128, '190427': 3129, 'icn': 3130, 'ssi': 3131, 'shxxbi': 3132, 'misayeon': 3133, 'icles': 3134, 'written': 3135, 'sana': 3136, 'clarifying': 3137, 'mentioned': 3138, 'emp': 3139, 'known': 3140, 'stark': 3141, 'jack': 3142, 'sparrows': 3143, 'badgirlsclb': 3144, 'befriend': 3145, 'snaked': 3146, 'theactualgpapi': 3147, 'kwipdraws': 3148, 'i8pichu': 3149, 'charlotte': 3150, 'gaze': 3151, 'epipheilany': 3152, 'enjoyed': 3153, 'shade': 3154, 'david': 3155, 'beckham': 3156, 'whichever': 3157, 'yafavshawty': 3158, 'a39pat': 3159, 'jorhallpia': 3160, 'celtoid2': 3161, 'speakerpelosi': 3162, 'outlaw': 3163, 'control': 3164, 'lin': 3165, 'manuel': 3166, 'gnight': 3167, 'volleyball': 3168, 'offseason': 3169, 'nobodycaresgetbetter': 3170, 'plus': 3171, 'equals': 3172, 'dougayoung': 3173, 'test': 3174, 'fairness': 3175, 'bent': 3176, 'suspendi': 3177, 'instilled': 3178, 'alexmyers': 3179, 'manifestldn': 3180, 'brewdog': 3181, 'brewdogusa': 3182, 'alexmair': 3183, 'tanisharobinson': 3184, 'prweek': 3185, 'reimbursed': 3186, 'niy': 3187, 'cashapp': 3188, 'cashsuppo': 3189, 'closes': 3190, 'receive': 3191, 'refund': 3192, 'classiclib3ral': 3193, 'wingers': 3194, 'burger': 3195, 'king': 3196, 'tweeting': 3197, 'sell': 3198, 'milkshakes': 3199, 'oohmeg': 3200, 'behind': 3201, 'faster': 3202, 'surrender': 3203, 'rise': 3204, 'odds': 3205, 'jesse': 3206, 'jackson': 3207, 'queermarquis': 3208, 'goodboysole': 3209, 'warm': 3210, 'shower': 3211, 'derekme2020': 3212, 'glenngriffin8': 3213, 'senatorcollins': 3214, 'senangusking': 3215, 'lepage': 3216, 'selfish': 3217, 'liars': 3218, 'piss': 3219, 'btobwings': 3220, 'cracked': 3221, 'neck': 3222, 'celtics': 3223, 'ryan': 3224, 'littlebeechinc': 3225, 'yup': 3226, 'naniniwala': 3227, 'talaga': 3228, 'ellie': 3229, 'yago': 3230, 'millie': 3231, 'hind': 3232, 'managers': 3233, 'finals': 3234, 'dxnniie': 3235, 'signing': 3236, 'trial': 3237, 'asks': 3238, 'details': 3239, 'deec748': 3240, 'drinking': 3241, 'muslims': 3242, 'garoukike': 3243, 'detest': 3244, 'chileans': 3245, 'admitted': 3246, 'profession': 3247, 'cpsocal': 3248, 'jaded': 3249, 'la': 3250, 'ending': 3251, 'joaoafonso2002': 3252, 'monkey': 3253, 'darling': 3254, 'tou': 3255, 'gorila': 3256, 'ahahhahaha': 3257, 'sing': 3258, 'olivia': 3259, 'zer0': 3260, 'rahkal': 3261, 'buck': 3262, 'quality': 3263, 
'quantity': 3264, '40': 3265, 'sweatylifeline': 3266, 'closest': 3267, 'ill': 3268, 'ashley': 3269, 'selfie': 3270, 'choseuqnyoun': 3271, 'tomtsec': 3272, 'travel': 3273, 'nearest': 3274, 'welfare': 3275, 'bittersanny': 3276, 'aliciaobrien': 3277, 'photo': 3278, 'zubearabdi': 3279, 'ikeepitchilld': 3280, 'lmaoooo': 3281, 'flyshitonly': 3282, 'bazzi': 3283, 'legitsarpong': 3284, 'forgiving': 3285, 'castro1021': 3286, 'reading': 3287, 'iker': 3288, 'casillas': 3289, 'attack': 3290, 'taekookmemories': 3291, 'yas': 3292, 'nanoynoynoy': 3293, 'upset': 3294, 'rabiasquared': 3295, 'hae': 3296, 'lee': 3297, 'woodlawn': 3298, 'disappeared': 3299, 'nearly': 3300, 'lea': 3301, 'recording': 3302, 'babeyie': 3303, 'diorscherie': 3304, 'noone': 3305, 'mention': 3306, 'lana': 3307, 'condor': 3308, 'gala': 3309, 'giambattista': 3310, 'stunning': 3311, 'vultures': 3312, 'eating': 3313, 'possum': 3314, 'danny': 3315, 'wantto': 3316, 'chicken': 3317, 'spinach': 3318, 'orzo': 3319, 'creamy': 3320, 'pesto': 3321, 'seitan': 3322, 'mashed': 3323, 'potatoes': 3324, 'honey': 3325, 'mustard': 3326, 'chili': 3327, 'cheese': 3328, 'cake': 3329, 'chelsearr24': 3330, 'alrighty': 3331, 'loveyoutakecare': 3332, 'whatsapp': 3333, 'tropicsass': 3334, 'd': 3335, 'thefinalepisode': 3336, 'season': 3337, 'smiling': 3338, 'tiresome': 3339, 'themjlegion': 3340, 'underst': 3341, 'messi27110673': 3342, 'fair': 3343, 'schedules': 3344, 'smtown': 3345, 'weareoneexo': 3346, 'sm': 3347, 'cyrusmmcqueen': 3348, 'septic': 3349, 'systems': 3350, 'disposals': 3351, 'compost': 3352, 'nickhansonmn': 3353, 'gunna': 3354, 'iamalanwalker': 3355, 'alternative': 3356, 'onmyway': 3357, 'sabrinaannlynn': 3358, 'farrukoofficial': 3359, 'lunabelle': 3360, '30am': 3361, 'longer': 3362, 'balbuenaa': 3363, 'slazo': 3364, 'frickin': 3365, 'realdjbj': 3366, 'ai': 3367, 'kiddgabbana': 3368, 'pulchritudeusa': 3369, 'nct127': 3370, 'yuta': 3371, 'mgsshitpost': 3372, 'sneak': 3373, 'yoitsmason': 3374, 'sus': 3375, 'workers': 3376, 'society': 3377, 'pinck': 3378, 'presented': 3379, 'replica': 3380, 'congressional': 3381, 'gold': 3382, 'medal': 3383, 'lazagna': 3384, 'mus': 3385, 'auntydonnaboys': 3386, 'castles': 3387, 'telly': 3388, 'rolling': 3389, 'loud': 3390, 'alarm': 3391, 'scariest': 3392, 'moment': 3393, 'woul': 3394, 'savinthebees': 3395, 'fast': 3396, 'moans': 3397, 'manga': 3398, 'bryceariell': 3399, 'asoiafjaime': 3400, 'lt': 3401, 'empireofthekop': 3402, 'blame': 3403, 'moments': 3404, 'klopp': 3405, 'fault': 3406, 'chances': 3407, 'missed': 3408, 'pablopicasshoe': 3409, '13': 3410, 'block': 3411, 'le': 3412, '17': 3413, 'apinkeunjeep': 3414, 'stuff': 3415, 'throat': 3416, 'sustenance': 3417, 'breitba': 3418, 'news': 3419, 'anncoulter': 3420, 'coulter': 3421, 'propaganda': 3422, 'wks': 3423, 'redfoxx92': 3424, 'play': 3425, 'manyvids': 3426, 'stretched': 3427, 'teosgame': 3428, 'spellings': 3429, 'lowercase': 3430, 'lose': 3431, 'joonie': 3432, 'ooohh': 3433, 'uwu': 3434, 'borofccentral': 3435, 'youngjby': 3436, 'mate': 3437, 'noncey': 3438, 'utdxtra': 3439, 'dof': 3440, 'buying': 3441, 'cb': 3442, 'jamescharles': 3443, 'jeffreestar': 3444, 'glamlifeguru': 3445, 'tutorials': 3446, 'reviews': 3447, 'ba': 3448, 'ethantaylor487': 3449, 'fag': 3450, 'livenationkpop': 3451, 'jypetwice': 3452, 'twicelights': 3453, 'sale': 3454, '4pm': 3455, 'sadmelancholia': 3456, 'targaryen': 3457, 'iconic': 3458, 'problems': 3459, 'happens': 3460, 'debater': 3461, 'approach': 3462, 'gun': 3463, 'quickly': 3464, 'google': 3465, 'sculpture': 3466, 'farming': 3467, 
'marking': 3468, 'asian': 3469, 'pac': 3470, 'yara': 3471, 'survive': 3472, 'disneyd23': 3473, 'toystory4': 3474, 'tedmcclelland': 3475, 'ilyasahshabazz': 3476, 'bri': 3477, 'rhodes24': 3478, 'highlighting': 3479, 'unfinished': 3480, 'acknowledge': 3481, 'uplift': 3482, 'aye': 3483, 'bro': 3484, 'vinterine': 3485, 'haw': 3486, 'tonysaying': 3487, 'flanos': 3488, 'chizmaga': 3489, 'thousand': 3490, 'grassley': 3491, 'disses': 3492, 'walks': 3493, 'sits': 3494, 'sile': 3495, 'kingslalisa': 3496, 'dua': 3497, 'perform': 3498, 'blackpinkinnewark': 3499, 'btsanswer': 3500, 'fancams': 3501, 'dahyun': 3502, 'recognition': 3503, 'homophobic': 3504, 'nifesq': 3505, 'dive': 3506, 'tackles': 3507, 'uses': 3508, 'brawn': 3509, 'unalive': 3510, 'tati': 3511, 'mentor': 3512, 'baaad': 3513, 'srivatsayb': 3514, 'masoodazhar': 3515, 'listing': 3516, 'listings': 3517, 'hafiz': 3518, 'saeed': 3519, 'jem': 3520, 'probablyiame': 3521, 'izogii': 3522, 'teeth': 3523, 'ihlaking': 3524, 'anyway': 3525, 'carefree': 3526, 'deer': 3527, 'prancing': 3528, 'beach': 3529, 'dawn': 3530, 'size': 3531, 'redo': 3532, 'emmyrossum': 3533, 'anacdotal': 3534, 'thatboysgood': 3535, 'waited': 3536, 'popeye': 3537, 'noticed': 3538, 'survived': 3539, 'djjezy': 3540, 'dawg': 3541, 'malditasosvos': 3542, 'exact': 3543, 'spot': 3544, 'freeway': 3545, 'bigbosstunna': 3546, 'battles': 3547, 'slap': 3548, 'mlota': 3549, 'azola': 3550, 'boobs': 3551, 'intelligently': 3552, 'nce1913': 3553, 'golden': 3554, 'clint': 3555, 'capela': 3556, 'tandy': 3557, 'omz': 3558, 'craigliddell58': 3559, 'presidenti': 3560, 'kingscrown08': 3561, 'rrepuvival': 3562, 'yate': 3563, 'england': 3564, 'via': 3565, 'supervisor': 3566, 'prettyinmoney': 3567, 'reimbursements': 3568, 'ones': 3569, 'subs': 3570, 'skedaddle74': 3571, 'moods': 3572, 'killer': 3573, 'disturbed': 3574, 'dmnug': 3575, 'cramps': 3576, '24': 3577, 'killadayday2000': 3578, 'together': 3579, 'taste': 3580, 'coochie': 3581, 'elmoisnowgod': 3582, 'letters': 3583, 'worship': 3584, 'elmo': 3585, 'j': 3586, 'k': 3587, 'kyle': 3588, 'eight': 3589, 'celebrities': 3590, 'raise': 3591, 'suppossed': 3592, 'role': 3593, 'model': 3594, 'maharashtrambfc': 3595, 'maharshi': 3596, 'hyderabad': 3597, 'areas': 3598, 'fastest': 3599, 'kphb': 3600, 'area': 3601, 'nonbb': 3602, 'cxroads': 3603, '9day': 3604, 'nytmike': 3605, 'officials': 3606, 'sought': 3607, 'counsel': 3608, 'mcgahn': 3609, 'threedailey': 3610, 'rferl': 3611, 'alleged': 3612, 'gru': 3613, 'agents': 3614, 'others': 3615, 'guilty': 3616, 'sentenced': 3617, 'montenegro': 3618, 'plot': 3619, 'ove': 3620, 'hrow': 3621, 'ministries': 3622, 'crucial': 3623, 'building': 3624, 'narrative': 3625, 'minofculturegoi': 3626, 'hrdministry': 3627, 'sstweeps': 3628, 'uae': 3629, 'may16': 3630, '18': 3631, 'wknd': 3632, 'dedepyaarde': 3633, '27383': 3634, '88': 3635, 'cr': 3636, 'mrlocal': 3637, '14040': 3638, '95': 3639, '47': 3640, 'lakh': 3641}
###Markdown
5. Encoding or Sequencing
###Code
# Convert each cleaned tweet into a sequence of integer ids using the tokenizer fitted above
encoded_clean_text_t_stem = tok_all.texts_to_sequences(clean_text_t_stem)
print(clean_text_t_stem[0])
print(encoded_clean_text_t_stem[0])
###Output
delmiyaa : samini resetting show moving things along nothing happened need know greatness
[81, 1603, 207, 545, 216, 789, 9, 10]
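###Markdown
A quick way to sanity-check the encoding is to invert the tokenizer's word index and map the integer ids back to tokens. This is only an illustrative check (it assumes `tok_all` is the Tokenizer fitted earlier); words that are not in the tokenizer's vocabulary, or that are stripped by its filters, are dropped by `texts_to_sequences`, which is why the encoded sequence can be shorter than the raw token list.
###Code
# Hedged sanity check: decode the first encoded tweet back into tokens.
# Assumes `tok_all` is the fitted Keras Tokenizer used above.
index_to_word = {i: w for w, i in tok_all.word_index.items()}
decoded_first = [index_to_word[i] for i in encoded_clean_text_t_stem[0]]
print(decoded_first)
###Output
_____no_output_____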
###Markdown
6. Pre-padding
###Code
from keras.preprocessing import sequence
# Pad (or truncate) every sequence to a fixed length of 100, adding zeros at the front ('pre')
max_length = 100
padded_clean_text_t_stem = sequence.pad_sequences(encoded_clean_text_t_stem, maxlen=max_length, padding='pre')
###Output
_____no_output_____
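###Markdown
For illustration, this is what pre-padding does to a short sequence: zeros are added on the left until the sequence reaches `maxlen`, and sequences longer than `maxlen` are truncated from the front by default. The toy ids below are purely illustrative and not taken from the dataset.
###Code
# Minimal illustration of 'pre' padding on a toy sequence of ids
from keras.preprocessing import sequence
print(sequence.pad_sequences([[81, 1603, 207]], maxlen=5, padding='pre'))
# expected: [[0 0 81 1603 207]] -- zeros are added on the left
###Output
_____no_output_____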
###Markdown
GloVe Embedding
###Code
# GloVe Embedding link - https://nlp.stanford.edu/projects/glove/
import os
import numpy as np

# Parse the 300-dimensional GloVe vectors into a dict: word -> embedding vector
embeddings_index = {}
f = open('drive/My Drive/HASOC Competition Data/Copy of glove.6B.300d.txt', encoding='utf-8')
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
print('Loaded %s word vectors.' % len(embeddings_index))

# Build the embedding matrix: row i holds the GloVe vector of the word with index i.
# Rows for words missing from GloVe remain all-zero.
embedding_matrix = np.zeros((vocabulary_all+1, 300))
for word, i in tok_all.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
###Output
_____no_output_____
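###Markdown
Before using the matrix it can help to check how much of the tweet vocabulary is actually covered by GloVe, since rows for out-of-vocabulary words stay all-zero. This is an optional, illustrative check that only uses the variables defined above.
###Code
# Hedged coverage check: how many vocabulary words received a GloVe vector?
covered = sum(1 for word in tok_all.word_index if word in embeddings_index)
print('GloVe coverage: %d / %d words (%.1f%%)'
      % (covered, len(tok_all.word_index), 100.0 * covered / len(tok_all.word_index)))
###Output
_____no_output_____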
###Markdown
**CNN Model**
###Code
from keras.preprocessing import text as keras_text, sequence as keras_seq
from keras.preprocessing import text, sequence
import numpy as np
import pandas as pd
import csv
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import PorterStemmer
from keras.models import Sequential, Model
from keras.layers import (Dense, Activation, Dropout, Input, Lambda, concatenate,
                          Embedding, LSTM, Conv1D, MaxPool1D, Flatten)
from keras.utils import np_utils
# Frozen embedding layer initialised with the GloVe matrix built above
Embedding_Layer = Embedding(vocabulary_all+1, 300, weights=[embedding_matrix],
                            input_length=max_length, trainable=False)

CNN2_model = Sequential([
    Embedding_Layer,
    Conv1D(128, 5, activation="relu", padding='same'),
    Dropout(0.2),
    MaxPool1D(2),
    Conv1D(64, 3, activation="relu", padding='same'),
    Dropout(0.2),
    MaxPool1D(2),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(2, activation="sigmoid")   # one output unit per class
])
CNN2_model.summary()

# Binary cross-entropy over the two-dimensional (one-hot) labels, with a small learning rate
from keras.optimizers import Adam
CNN2_model.compile(loss="binary_crossentropy", optimizer=Adam(lr=0.00003), metrics=["accuracy"])

# Save a diagram of the architecture
from keras.utils.vis_utils import plot_model
plot_model(CNN2_model, to_file='CNN2_model.png', show_shapes=True, show_layer_names=True)
###Output
_____no_output_____
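###Markdown
As a quick architecture check, the untrained model can already be run on a couple of padded sequences; with `Dense(2, activation="sigmoid")` as the last layer the output should have shape `(n_samples, 2)`. This is only an illustrative forward pass, not part of training.
###Code
# Illustrative forward pass through the (as yet untrained) model, for a shape check only
sample_out = CNN2_model.predict(padded_clean_text_t_stem[:2])
print(sample_out.shape)   # expected: (2, 2)
###Output
_____no_output_____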
###Markdown
Training Callbacks (Early Stopping and Learning-Rate Reduction)
###Code
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
# Stop training if the monitored metric (val_loss by default) does not improve for 8 epochs
earlystopper = EarlyStopping(patience=8, verbose=1)
# Shrink the learning rate by 10% after 2 stagnant epochs; both callbacks monitor val_loss,
# so they only act when validation data is passed to fit()
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.9,
                              patience=2, min_lr=0.00001, verbose=1)
###Output
_____no_output_____
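###Markdown
Both callbacks monitor `val_loss`, so they only take effect when validation data is supplied during training. A minimal sketch of a held-out split is shown below; the 80/20 ratio and the use of scikit-learn's `train_test_split` are illustrative choices rather than part of the original pipeline, and the names `X_train`/`X_val`/`y_train`/`y_val` are hypothetical.
###Code
# Hedged sketch: carve out a validation set so that val_loss exists during training.
# Assumes padded_clean_text_stem and label_twoDimension are the training arrays used below.
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
    padded_clean_text_stem, label_twoDimension, test_size=0.2, random_state=42)
# The fit call could then pass validation_data=(X_val, y_val).
###Output
_____no_output_____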
###Markdown
**Model Fitting or Training**
###Code
# Note: no validation data is passed here, so the val_loss-based callbacks cannot trigger
# (hence the "metric `val_loss` which is not available" warnings in the log below).
hist = CNN2_model.fit(padded_clean_text_stem,label_twoDimension,epochs=200,batch_size=32,callbacks=[earlystopper, reduce_lr])
###Output
Epoch 1/200
116/116 [==============================] - ETA: 0s - loss: 0.6877 - accuracy: 0.5612WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 10ms/step - loss: 0.6877 - accuracy: 0.5612
Epoch 2/200
111/116 [===========================>..] - ETA: 0s - loss: 0.6575 - accuracy: 0.6875WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.6563 - accuracy: 0.6874
Epoch 3/200
111/116 [===========================>..] - ETA: 0s - loss: 0.5878 - accuracy: 0.7486WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.5849 - accuracy: 0.7503
Epoch 4/200
111/116 [===========================>..] - ETA: 0s - loss: 0.4848 - accuracy: 0.8026WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.4836 - accuracy: 0.8026
Epoch 5/200
114/116 [============================>.] - ETA: 0s - loss: 0.4184 - accuracy: 0.8314WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.4161 - accuracy: 0.8328
Epoch 6/200
116/116 [==============================] - ETA: 0s - loss: 0.3766 - accuracy: 0.8501WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.3766 - accuracy: 0.8501
Epoch 7/200
116/116 [==============================] - ETA: 0s - loss: 0.3450 - accuracy: 0.8646WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.3450 - accuracy: 0.8646
Epoch 8/200
113/116 [============================>.] - ETA: 0s - loss: 0.3276 - accuracy: 0.8700WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.3262 - accuracy: 0.8708
Epoch 9/200
114/116 [============================>.] - ETA: 0s - loss: 0.3086 - accuracy: 0.8745WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.3073 - accuracy: 0.8757
Epoch 10/200
113/116 [============================>.] - ETA: 0s - loss: 0.2923 - accuracy: 0.8833WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.2940 - accuracy: 0.8816
Epoch 11/200
114/116 [============================>.] - ETA: 0s - loss: 0.2805 - accuracy: 0.8904WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.2809 - accuracy: 0.8900
Epoch 12/200
111/116 [===========================>..] - ETA: 0s - loss: 0.2730 - accuracy: 0.8908WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.2701 - accuracy: 0.8919
Epoch 13/200
112/116 [===========================>..] - ETA: 0s - loss: 0.2579 - accuracy: 0.8993WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.2578 - accuracy: 0.8999
Epoch 14/200
110/116 [===========================>..] - ETA: 0s - loss: 0.2487 - accuracy: 0.9062WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.2526 - accuracy: 0.9035
Epoch 15/200
110/116 [===========================>..] - ETA: 0s - loss: 0.2407 - accuracy: 0.9065WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.2395 - accuracy: 0.9075
Epoch 16/200
115/116 [============================>.] - ETA: 0s - loss: 0.2369 - accuracy: 0.9109WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.2370 - accuracy: 0.9107
Epoch 17/200
111/116 [===========================>..] - ETA: 0s - loss: 0.2234 - accuracy: 0.9122WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.2255 - accuracy: 0.9124
Epoch 18/200
110/116 [===========================>..] - ETA: 0s - loss: 0.2192 - accuracy: 0.9193WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.2204 - accuracy: 0.9175
Epoch 19/200
112/116 [===========================>..] - ETA: 0s - loss: 0.2096 - accuracy: 0.9252WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.2101 - accuracy: 0.9253
Epoch 20/200
115/116 [============================>.] - ETA: 0s - loss: 0.2008 - accuracy: 0.9239WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.2006 - accuracy: 0.9237
Epoch 21/200
114/116 [============================>.] - ETA: 0s - loss: 0.1953 - accuracy: 0.9312WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1943 - accuracy: 0.9320
Epoch 22/200
110/116 [===========================>..] - ETA: 0s - loss: 0.1837 - accuracy: 0.9358WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1836 - accuracy: 0.9347
Epoch 23/200
115/116 [============================>.] - ETA: 0s - loss: 0.1710 - accuracy: 0.9429WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.1716 - accuracy: 0.9428
Epoch 24/200
110/116 [===========================>..] - ETA: 0s - loss: 0.1751 - accuracy: 0.9358WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1731 - accuracy: 0.9369
Epoch 25/200
111/116 [===========================>..] - ETA: 0s - loss: 0.1624 - accuracy: 0.9431WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1623 - accuracy: 0.9431
Epoch 26/200
112/116 [===========================>..] - ETA: 0s - loss: 0.1527 - accuracy: 0.9534WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.1521 - accuracy: 0.9531
Epoch 27/200
112/116 [===========================>..] - ETA: 0s - loss: 0.1473 - accuracy: 0.9523WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1457 - accuracy: 0.9531
Epoch 28/200
116/116 [==============================] - ETA: 0s - loss: 0.1428 - accuracy: 0.9536WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1428 - accuracy: 0.9536
Epoch 29/200
115/116 [============================>.] - ETA: 0s - loss: 0.1297 - accuracy: 0.9620WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1305 - accuracy: 0.9612
Epoch 30/200
115/116 [============================>.] - ETA: 0s - loss: 0.1287 - accuracy: 0.9598WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1287 - accuracy: 0.9598
Epoch 31/200
115/116 [============================>.] - ETA: 0s - loss: 0.1232 - accuracy: 0.9603WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1228 - accuracy: 0.9604
Epoch 32/200
114/116 [============================>.] - ETA: 0s - loss: 0.1122 - accuracy: 0.9671WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1116 - accuracy: 0.9674
Epoch 33/200
112/116 [===========================>..] - ETA: 0s - loss: 0.1081 - accuracy: 0.9688WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1078 - accuracy: 0.9693
Epoch 34/200
112/116 [===========================>..] - ETA: 0s - loss: 0.1018 - accuracy: 0.9685WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.1017 - accuracy: 0.9682
Epoch 35/200
114/116 [============================>.] - ETA: 0s - loss: 0.0970 - accuracy: 0.9715WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0969 - accuracy: 0.9717
Epoch 36/200
112/116 [===========================>..] - ETA: 0s - loss: 0.0874 - accuracy: 0.9768WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0877 - accuracy: 0.9765
Epoch 37/200
112/116 [===========================>..] - ETA: 0s - loss: 0.0862 - accuracy: 0.9766WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0867 - accuracy: 0.9765
Epoch 38/200
113/116 [============================>.] - ETA: 0s - loss: 0.0801 - accuracy: 0.9784WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0802 - accuracy: 0.9787
Epoch 39/200
110/116 [===========================>..] - ETA: 0s - loss: 0.0755 - accuracy: 0.9821WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0768 - accuracy: 0.9809
Epoch 40/200
111/116 [===========================>..] - ETA: 0s - loss: 0.0743 - accuracy: 0.9792WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0740 - accuracy: 0.9795
Epoch 41/200
112/116 [===========================>..] - ETA: 0s - loss: 0.0684 - accuracy: 0.9824WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0673 - accuracy: 0.9830
Epoch 42/200
111/116 [===========================>..] - ETA: 0s - loss: 0.0648 - accuracy: 0.9842WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0652 - accuracy: 0.9841
Epoch 43/200
115/116 [============================>.] - ETA: 0s - loss: 0.0600 - accuracy: 0.9842WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0598 - accuracy: 0.9844
Epoch 44/200
111/116 [===========================>..] - ETA: 0s - loss: 0.0577 - accuracy: 0.9862WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0584 - accuracy: 0.9860
Epoch 45/200
111/116 [===========================>..] - ETA: 0s - loss: 0.0556 - accuracy: 0.9873WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0564 - accuracy: 0.9868
Epoch 46/200
113/116 [============================>.] - ETA: 0s - loss: 0.0536 - accuracy: 0.9873WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 9ms/step - loss: 0.0535 - accuracy: 0.9873
Epoch 47/200
115/116 [============================>.] - ETA: 0s - loss: 0.0490 - accuracy: 0.9883WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0489 - accuracy: 0.9884
Epoch 48/200
110/116 [===========================>..] - ETA: 0s - loss: 0.0444 - accuracy: 0.9920WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0439 - accuracy: 0.9919
Epoch 49/200
110/116 [===========================>..] - ETA: 0s - loss: 0.0423 - accuracy: 0.9901WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0417 - accuracy: 0.9900
Epoch 50/200
115/116 [============================>.] - ETA: 0s - loss: 0.0403 - accuracy: 0.9902WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0402 - accuracy: 0.9903
Epoch 51/200
110/116 [===========================>..] - ETA: 0s - loss: 0.0421 - accuracy: 0.9906WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0418 - accuracy: 0.9908
Epoch 52/200
116/116 [==============================] - ETA: 0s - loss: 0.0384 - accuracy: 0.9906WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0384 - accuracy: 0.9906
Epoch 53/200
115/116 [============================>.] - ETA: 0s - loss: 0.0347 - accuracy: 0.9924WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0348 - accuracy: 0.9922
Epoch 54/200
112/116 [===========================>..] - ETA: 0s - loss: 0.0319 - accuracy: 0.9930WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0326 - accuracy: 0.9924
Epoch 55/200
110/116 [===========================>..] - ETA: 0s - loss: 0.0329 - accuracy: 0.9929WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0331 - accuracy: 0.9927
Epoch 56/200
115/116 [============================>.] - ETA: 0s - loss: 0.0321 - accuracy: 0.9932WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0320 - accuracy: 0.9933
Epoch 57/200
111/116 [===========================>..] - ETA: 0s - loss: 0.0298 - accuracy: 0.9921WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0300 - accuracy: 0.9919
Epoch 58/200
115/116 [============================>.] - ETA: 0s - loss: 0.0272 - accuracy: 0.9929WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
[Epochs 58-199 condensed for brevity. Each epoch completes in about 1 s (8-9 ms/step); training loss falls from roughly 0.027 to 0.007 and training accuracy rises from roughly 0.993 to 0.997. Every epoch also repeats the same two TensorFlow warnings, that EarlyStopping and ReduceLROnPlateau are "conditioned on metric `val_loss` which is not available" while the only available metrics are loss, accuracy, and lr. This indicates that no validation data was passed to `model.fit()`, so neither callback ever triggers and training runs through all 200 epochs; a sketch of one way to restore validation monitoring follows.]
Epoch 200/200
115/116 [============================>.] - ETA: 0s - loss: 0.0076 - accuracy: 0.9959WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy,lr
116/116 [==============================] - 1s 8ms/step - loss: 0.0075 - accuracy: 0.9960
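###Markdown
Note: the repeated warnings above mean that the `EarlyStopping` and `ReduceLROnPlateau` callbacks are monitoring `val_loss`, but `fit` was called without any validation data, so neither callback ever triggers. A minimal sketch of one way to fix this is shown below; it assumes the `CNN2_model` and `padded_clean_text_stem` training inputs used elsewhere in this notebook, treats `label_twoDimension` as the (assumed) training counterpart of `label_twoDimension_t`, and uses illustrative hyperparameter values rather than the exact ones from the cell above.
###Code
# Sketch only: hold out part of the training data so val_loss exists for the callbacks.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
callbacks = [
    EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5),
]
hist = CNN2_model.fit(
    padded_clean_text_stem,   # training inputs (name taken from later cells)
    label_twoDimension,       # one-hot training labels (assumed name)
    epochs=200,
    batch_size=32,
    validation_split=0.1,     # provides val_loss for the callbacks
    callbacks=callbacks,
)
###Output
_____no_output_____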
###Markdown
Log Loss
###Code
# Predict class probabilities on the (stemmed, padded) test set
CNN2_model_predictions = CNN2_model.predict(padded_clean_text_t_stem)

from sklearn.metrics import log_loss

# Cross-entropy between the one-hot test labels and the predicted probabilities
log_loss_test = log_loss(label_twoDimension_t, CNN2_model_predictions)
log_loss_test
###Output
_____no_output_____
###Markdown
Classification Report
###Code
# Convert predicted probabilities to one-hot predictions (argmax per row)
predictions = np.zeros_like(CNN2_model_predictions)
predictions[np.arange(len(CNN2_model_predictions)), CNN2_model_predictions.argmax(1)] = 1

# Integer class labels for the submission file
predictionInteger = np.argmax(predictions, axis=1)
predictionInteger

pred_label = np.array(predictionInteger)
df = pd.DataFrame(data=pred_label, columns=["task1"])
print(df)
df.to_csv("submission_EN_A.csv", index=False)

from sklearn.metrics import classification_report
print(classification_report(label_twoDimension_t, predictions))
###Output
              precision    recall  f1-score   support

           0       0.91      0.80      0.86       423
           1       0.81      0.92      0.86       391

   micro avg       0.86      0.86      0.86       814
   macro avg       0.86      0.86      0.86       814
weighted avg       0.87      0.86      0.86       814
 samples avg       0.86      0.86      0.86       814
###Markdown
Epoch v/s Loss Plot
###Code
from matplotlib import pyplot as plt
plt.plot(hist.history["loss"],color = 'red', label = 'train_loss')
#plt.plot(hist.history["val_loss"],color = 'blue', label = 'val_loss')
plt.title('Loss Visualisation')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.savefig('CNN2_HASOC_Eng_lossPlot.pdf',dpi=1000)
from google.colab import files
files.download('CNN2_HASOC_Eng_lossPlot.pdf')
###Output
_____no_output_____
###Markdown
Epoch v/s Accuracy Plot
###Code
plt.plot(hist.history["accuracy"],color = 'red', label = 'train_accuracy')
#plt.plot(hist.history["val_accuracy"],color = 'blue', label = 'val_accuracy')
plt.title('Accuracy Visualisation')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('CNN2_HASOC_Eng_accuracyPlot.pdf',dpi=1000)
files.download('CNN2_HASOC_Eng_accuracyPlot.pdf')
###Output
_____no_output_____
###Markdown
Area under Curve-ROC
###Code
pred_train = CNN2_model.predict(padded_clean_text_stem)
pred_test = CNN2_model.predict(padded_clean_text_t_stem)
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc
def plot_AUC_ROC(y_true, y_pred):
    n_classes = 2  # change this value according to the number of classes

    # Compute ROC curve and ROC area for each class
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    for i in range(n_classes):
        fpr[i], tpr[i], _ = roc_curve(y_true[:, i], y_pred[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])

    # Compute micro-average ROC curve and ROC area
    fpr["micro"], tpr["micro"], _ = roc_curve(y_true.ravel(), y_pred.ravel())
    roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

    lw = 2

    # Compute macro-average ROC curve and ROC area:
    # first aggregate all false positive rates
    all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))

    # then interpolate all ROC curves at these points
    mean_tpr = np.zeros_like(all_fpr)
    for i in range(n_classes):
        mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])  # np.interp replaces the deprecated scipy.interp

    # finally average it and compute AUC
    mean_tpr /= n_classes
    fpr["macro"] = all_fpr
    tpr["macro"] = mean_tpr
    roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])

    # Plot all ROC curves
    plt.figure()
    plt.plot(fpr["micro"], tpr["micro"],
             label='micro-average ROC curve (area = {0:0.2f})'.format(roc_auc["micro"]),
             color='deeppink', linestyle=':', linewidth=4)
    plt.plot(fpr["macro"], tpr["macro"],
             label='macro-average ROC curve (area = {0:0.2f})'.format(roc_auc["macro"]),
             color='navy', linestyle=':', linewidth=4)

    colors = cycle(['aqua', 'darkorange'])
    classes_list1 = ["Non-duplicate", "Duplicate"]
    for i, color, c in zip(range(n_classes), colors, classes_list1):
        plt.plot(fpr[i], tpr[i], color=color, lw=lw,
                 label='{0} (AUC = {1:0.2f})'.format(c, roc_auc[i]))

    plt.plot([0, 1], [0, 1], 'k--', lw=lw)
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic curve')
    plt.legend(loc="lower right")
    plt.savefig('CNN2_HASOC_Eng_Area_RocPlot.pdf', dpi=1000)
    files.download('CNN2_HASOC_Eng_Area_RocPlot.pdf')
plot_AUC_ROC(label_twoDimension_t,pred_test)
###Output
_____no_output_____
notebooks/tutorial/PopCount.ipynb | ###Markdown
PopCount8 and PopCount

In this tutorial, we will illustrate how `Python` can be used to construct `Magma` circuits. We use Wallace Trees to construct a `PopCount` circuit, which counts the number of bits that are set in an n-bit value.
###Code
import magma as m
m.set_mantle_target("ice40")
###Output
_____no_output_____
###Markdown
In this example, we are going to use the built-in `fulladder` from `Mantle`. `fulladder` instantiates a 3-input, 2-output full adder and wires up the inputs and the outputs. A common name for a full adder is a carry-sum adder, `csa`.
###Code
from mantle import fulladder
csa = fulladder
###Output
import lattice ice40
import lattice mantle40
###Markdown
To construct the 8-bit popcount, we first use 3 fulladders to sum bits 0 through 2, 3 through 5, and 6 through 7. This forms 3 2-bit results. We can consider the results to be two columns, one for each *place*: the first column is the 1s and the second column is the 2s. We then use two fulladders to sum these columns. We continue summing 3 bits at a time until we get a single bit in each column. A common way to show these operations is with *Dadda dot notation*, which shows how many bits are in each column.
###Code
def popcount8(I):
# Dadda dot notation (of the result)
# o o
# o o
# o o
csa0_0_21 = csa(I[0], I[1], I[2])
csa0_1_21 = csa(I[3], I[4], I[5])
csa0_2_21 = csa(I[6], I[7], 0)
# o o
# o o
csa1_0_21 = csa(csa0_0_21[0], csa0_1_21[0], csa0_2_21[0])
csa1_0_42 = csa(csa0_0_21[1], csa0_1_21[1], csa0_2_21[1])
# o
# o o o
csa2_0_42 = csa(csa1_0_21[1], csa1_0_42[0], 0)
# o o o o
csa2_0_84 = csa(csa1_0_42[1], csa2_0_42[0], 0)
return m.bits([csa1_0_21[0], csa2_0_42[0], csa2_0_84[0], csa2_0_84[1]])
###Output
_____no_output_____
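###Markdown
As a quick sanity check (not part of the original tutorial), the behavior that the circuit construction above is meant to implement can be described with a plain-Python reference function that simply counts the set bits of an 8-bit value:
###Code
# Pure-Python reference model for an 8-bit population count.
def popcount8_reference(x):
    return bin(x & 0xFF).count("1")

# Spot-check a few representative inputs.
for value in (0b00000000, 0b00000001, 0b10101010, 0b11111111):
    print(f"{value:08b} -> {popcount8_reference(value)}")
###Output
_____no_output_____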
###Markdown
Test bench

In order to test the popcount circuit, we set up the IceStick board to have eight inputs and four outputs. As before, `J1` will be used for inputs and `J3` for outputs.
###Code
from loam.boards.icestick import IceStick
icestick = IceStick()
for i in range(8):
icestick.J1[i].input().on()
for i in range(4):
icestick.J3[i].output().on()
main = icestick.DefineMain()
m.wire( popcount8(main.J1), main.J3 )
m.EndDefine()
m.compile('build/popcount8', main)
###Output
compiling FullAdder
compiling main
###Markdown
We then use our `yosys`, `arachne-pnr`, and `icestorm` tool flow to build the bitstream and program the board.
###Code
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif popcount8.blif' popcount8.v
arachne-pnr -q -d 1k -o popcount8.txt -p popcount8.pcf popcount8.blif
icepack popcount8.txt popcount8.bin
iceprog popcount8.bin
###Output
/Users/hanrahan/git/magmathon/notebooks/tutorial/build
Tensorflow primitives.ipynb
Variable tensors
###Code
import tensorflow as tf  # assumed missing import (TensorFlow 2.x)

v = tf.Variable([[1., 2., 3.], [4., 5., 6.]])  # mutable 2x3 variable
v
v.value()            # read the current value as a tensor
v.assign(2 * v)      # in-place update: double every element
v[0, 1].assign(42)   # update a single element
v[0, 1] = 42         # raises TypeError: Variables do not support direct item assignment
###Output
_____no_output_____
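###Markdown
For reference (not in the original notebook), the supported ways to modify a `tf.Variable` in place are the `assign*` methods and scatter updates, since direct item assignment raises a `TypeError`. A small sketch, assuming `v` from the cell above:
###Code
v.assign_add([[1., 1., 1.], [1., 1., 1.]])   # element-wise in-place addition
v.assign_sub([[1., 1., 1.], [1., 1., 1.]])   # element-wise in-place subtraction
v.scatter_nd_update(indices=[[0, 0], [1, 2]], updates=[100., 200.])  # sparse element updates
###Output
_____no_output_____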