path (string, 7-265 chars) | concatenated_notebook (string, 46-17M chars)
---|---|
colab/autompg_xgboost_service.ipynb | ###Markdown
Load the pickled XGBoost model and scaler
###Code
import pickle
lr= pickle.load(open('./xgb_model.pkl', 'rb'))
# open with 'rb' because the pickle is a binary file
type(lr)
scaler= pickle.load(open('./scaler_xgb.pkl', 'rb'))
type(scaler)
###Output
_____no_output_____
###Markdown
Predict
###Code
displacement= 307.0
horsepower= 130.0
weight= 3504.0
accel= 12.0
origin= 1
cylinders= 8
# origin-> 1,2,3 cylinders-> 3,4,5,6,8
if cylinders == 3:
cylinder= [1,0,0,0,0]
elif cylinders == 4:
cylinder= [0,1,0,0,0]
elif cylinders == 5:
cylinder= [0,0,1,0,0]
elif cylinders == 6:
cylinder= [0,0,0,1,0]
elif cylinders == 8:
cylinder= [0,0,0,0,1]
if origin == 1:
org= [1,0,0]
elif origin == 2:
org= [0,1,0]
elif origin == 3:
org= [0,0,1]
# x_customer = [[displacement, horsepower, weight, accel, origin, cylinders]]
# keep the same feature order that was used when the scaler and model were trained
x_customer = [displacement, horsepower, weight, accel] + cylinder + org
x_customer = scaler.transform([x_customer])
x_customer.shape
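# a minimal sketch of the actual prediction call, assuming lr is the loaded
# XGBoost regressor and x_customer is the scaled row built above
y_pred = lr.predict(x_customer)
print(y_pred)  # predicted mpg for the custom input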
###Output
_____no_output_____ |
notebooks/Validation_plots.ipynb | ###Markdown
Plot weight norms
###Code
import torch
import plotly.graph_objects as go
from pathlib import Path

# experiment_name is assumed to be defined earlier in the notebook
models_path = Path('../..')/Path('models')/experiment_name
def compute_norms(model):
    # .item() turns the 0-dim norm tensors into plain floats for plotting
    return {k: torch.norm(v).item() for k, v in model.items() if k.endswith('.weight')}
steps = sorted([int(x.stem.split('_')[1]) for x in models_path.glob("step_*.pt")])
norms = None
for step in steps:
model_path = models_path/f'step_{step}.pt'
model = torch.load(model_path, map_location='cpu')
current = compute_norms(model['model'])
if norms is None:
norms = {k: [v] for k, v in current.items()}
else:
norms = {k: v + [current[k]] for k, v in norms.items()}
fig = go.Figure()
for k in norms.keys():
fig.add_trace(go.Scatter(
x=steps,
y=norms[k],
name=k, # this sets its legend entry,
))
fig.show()
step = 135000
model_path = models_path/f'step_{step}.pt'
model = torch.load(model_path, map_location='cpu')
print({k: torch.norm(v) for k, v in model['model'].items() if k.endswith('.bias')})
model['model']['predictor.enc.0.conv.bias']
###Output
_____no_output_____ |
Data Analysis/Upload/numpy-array-operations 2.ipynb | ###Markdown
> **Assignment 2 - Numpy Array Operations** >> This assignment is part of the course ["Data Analysis with Python: Zero to Pandas"](http://zerotopandas.com). The objective of this assignment is to develop a solid understanding of Numpy array operations. In this assignment you will:> > 1. Pick 5 interesting Numpy array functions by going through the documentation: https://numpy.org/doc/stable/reference/routines.html > 2. Run and modify this Jupyter notebook to illustrate their usage (some explanation and 3 examples for each function). Use your imagination to come up with interesting and unique examples.> 3. Upload this notebook to your Jovian profile using `jovian.commit` and make a submission here: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-2-numpy-array-operations> 4. (Optional) Share your notebook online (on Twitter, LinkedIn, Facebook) and on the community forum thread: https://jovian.ml/forum/t/assignment-2-numpy-array-operations-share-your-work/10575 . > 5. (Optional) Check out the notebooks [shared by other participants](https://jovian.ml/forum/t/assignment-2-numpy-array-operations-share-your-work/10575) and give feedback & appreciation.>> The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks.>> Try to give your notebook a catchy title & subtitle e.g. "All about Numpy array operations", "5 Numpy functions you didn't know you needed", "A beginner's guide to broadcasting in Numpy", "Interesting ways to create Numpy arrays", "Trigonometic functions in Numpy", "How to use Python for Linear Algebra" etc.>> **NOTE**: Remove this block of explanation text before submitting or sharing your notebook online - to make it more presentable. Title Here Subtitle HereWrite a short introduction about Numpy and list the chosen functions. - function 1- function 2- function 3- function 4- function 5The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks.
###Code
!pip install jovian --upgrade -q
import jovian
jovian.commit(project='numpy-array-operations')
###Output
_____no_output_____
###Markdown
Let's begin by importing Numpy and listing out the functions covered in this notebook.
###Code
import numpy as np
# List of functions explained
function1 = np.linalg.inv
function2 = np.transpose
function3 = np.append
function4 = np.linalg.norm
function5 = np.swapaxes
###Output
_____no_output_____
###Markdown
Function 1 - np.linalg.invAdd some explanation about the function in your own words
###Code
# Example 1 - working (change this)
mat1 = np.arange(1, 10).reshape(3, 3)
mat1
###Output
_____no_output_____
###Markdown
Explanation about example
###Code
# Example 2 - working
np.linalg.inv(mat1)
###Output
_____no_output_____
###Markdown
Explanation about example
###Code
# Example 3 - breaking (to illustrate when it breaks)
mat2 = np.arange(1, 7).reshape(2, 3)
mat2
np.linalg.inv(mat2)
# np.linalg.inv only works on square matrices
# mat2 has shape (2, 3); since m != n it raises a LinAlgError
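# a possible workaround: np.linalg.pinv computes the Moore-Penrose
# pseudo-inverse, which is defined for non-square matrices as well
np.linalg.pinv(mat2)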
###Output
_____no_output_____
###Markdown
Explanation about example (why it breaks and how to fix it) Some closing comments about when to use this function.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 2 - np.transposeAdd some explanations
###Code
# Example 1 - working
mat1 = np.array([1, 2, 3, 4, 5])
print(mat1.shape)
np.transpose(mat1)
###Output
(5,)
###Markdown
Explanation about example
###Code
# Example 2 - working
mat2 = np.arange(1, 7).reshape(2, 3)
print(mat2.shape)
np.transpose(mat2)
###Output
(2, 3)
###Markdown
Explanation about example
###Code
# Example 3 - breaking (to illustrate when it breaks)
mat3 = np.array([[1, 2, 3],[10, 20]])
# np.transpose(mat3)
np.transpose(mat3)
# this does not behave as expected because mat3 is not a proper 2D array:
# the first row has 3 elements while the second row has only 2,
# so mat3[1, 2] does not exist
# np.transpose works on any vector, matrix or n-dimensional numpy array
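# a possible fix: pad the rows to equal length so numpy builds a regular
# 2-D array (padding with 0 here is just an illustrative choice)
mat3_fixed = np.array([[1, 2, 3], [10, 20, 0]])
np.transpose(mat3_fixed)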
###Output
<ipython-input-22-307d13efd60f>:2: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
mat3 = np.array([[1, 2, 3],[10, 20]])
###Markdown
Explanation about example (why it breaks and how to fix it) Some closing comments about when to use this function.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 3 - np.appendAdd some explanations
###Code
# Example 1 - working
mat1 = np.arange(1, 10).reshape(3, -1)
print(mat1.shape)
val = [10, 11, 12]
np.append(mat1,[val], axis=0)
###Output
(3, 3)
###Markdown
Explanation about example
###Code
# Example 2 - working
mat2 = np.arange(1, 10).reshape(3, -1)
print(mat2.shape)
val = np.array([10, 11, 12]).reshape(3, 1)
np.append(mat2,val, axis=1)
###Output
(3, 3)
###Markdown
Explanation about example
###Code
# Example 3 - breaking (to illustrate when it breaks)
mat2 = np.arange(1, 10).reshape(3, -1)
print(mat2.shape)
val = np.array([10, 11]).reshape(2, 1)
np.append(mat2,val, axis=1)
# this raises a ValueError: mat2 has 3 rows but val has only 2,
# so to append a column to mat2, val must have shape (3, 1)
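# a possible fix: give val one entry per row of mat2
val_fixed = np.array([10, 11, 12]).reshape(3, 1)
np.append(mat2, val_fixed, axis=1)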
###Output
(3, 3)
###Markdown
Explanation about example (why it breaks and how to fix it) Some closing comments about when to use this function.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 4 - np.linalg.normAdd some explanations
###Code
# Example 1 - working
vec1 = np.array([1,2, 3])
np.linalg.norm(vec1)
###Output
_____no_output_____
###Markdown
Explanation about example
###Code
# Example 2 - working
vec2 = np.array([1,2,3,4,5,6,7,9])
np.linalg.norm(vec2)
###Output
_____no_output_____
###Markdown
Explanation about example
###Code
# Example 3 - breaking (to illustrate when it breaks)
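# one possible breaking case: the Frobenius norm ('fro') is only defined for
# matrices, so requesting it on a 1-D vector raises a ValueError
vec3 = np.array([1, 2, 3])
np.linalg.norm(vec3, ord='fro')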
###Output
_____no_output_____
###Markdown
Explanation about example (why it breaks and how to fix it) Some closing comments about when to use this function.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 5 - np.swapaxesAdd some explanations
###Code
# Example 1 - working
mat1 = np.arange(1, 10).reshape(3, 3)
np.swapaxes(mat1, 0, 1)
###Output
_____no_output_____
###Markdown
Explanation about example
###Code
# Example 2 - working
mat2 = np.arange(1, 9).reshape(2, 2, 2)
np.swapaxes(mat2, 2, 0)
# reshape to 2-D first so that axis 1 exists before swapping
np.swapaxes(np.arange(1, 5).reshape(2, 2), 1, 0)
###Output
_____no_output_____
###Markdown
Explanation about example
###Code
# Example 3 - breaking (to illustrate when it breaks)
mat3 = np.arange(1, 10)
print(mat3.ndim)
np.swapaxes(mat3, 1, 0)
# mat3 is a 1-D array, so it only has axis 0
# asking to swap axis 1 with axis 0 therefore raises an AxisError
# np.swapaxes needs an array with at least 2 dimensions (or valid axis indices)
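# a possible fix: give the array a second axis first, e.g. via reshape
np.swapaxes(mat3.reshape(1, -1), 1, 0)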
###Output
1
###Markdown
Explanation about example (why it breaks and how to fix it) Some closing comments about when to use this function.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
ConclusionSummarize what was covered in this notebook, and where to go next Reference LinksProvide links to your references and other interesting articles about Numpy arrays:* Numpy official tutorial : https://numpy.org/doc/stable/user/quickstart.html* ...
###Code
jovian.commit()
###Output
_____no_output_____ |
RegressionModels/Logistic Regression/StudentsGradePrediction.ipynb | ###Markdown
Data pre-processing
###Code
#importing the libraries used in this notebook
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, precision_score

#reading both the maths class file and the Portuguese class file
d1 = pd.read_csv('/home/panther/Downloads/student-mat.csv')
d2 = pd.read_csv('/home/panther/Downloads/student-por.csv')
#merging both the data frames
data = pd.concat([d1, d2], ignore_index=True, sort=False)
#checking for duplicate rows
data[data.duplicated()]
#understanding columns
for i in data.columns:
print(i)
plt.bar(data[i].unique(),data[i].value_counts())
plt.xlabel('unique values')
plt.ylabel('counts')
plt.show()
#checking columns having more than 80% single values
fx = lambda x : max(data[x].value_counts())/data.shape[0]>.80
single_value_columns = list(filter(fx,data.columns))
#dropping 'failures' from the single-value columns because it plays an important role in the grade
single_value_columns.remove('failures')
#checking null values
data.isnull().sum()
#total alcohol consumption (workday + weekend)
data['total_alcohol'] = data['Walc']+data['Dalc']
data = data.drop(['Walc','Dalc'],axis=1)
data['pedu'] = data['Fedu']+data['Medu']
data = data.drop(['Fedu','Medu'],axis=1)
# identifying categorical columns
catogrial_col = data.columns[data.dtypes == 'object']
#binning G1 and G2 into grade classes; the target we focus on here is G3
data.loc[data['G1'].between(0,7),'G1']= 3
data.loc[data['G1'].between(8,14),'G1']= 2
data.loc[data['G1'].between(15,20),'G1']= 1
data.loc[data['G2'].between(0,7),'G2']= 3
data.loc[data['G2'].between(8,14),'G2']= 2
data.loc[data['G2'].between(15,20),'G2']= 1
#grouping the grade into classes
data.loc[data['G3'].between(0,7),'G3']= 3
data.loc[data['G3'].between(8,14),'G3']= 2
data.loc[data['G3'].between(15,20),'G3']= 1
#let's see how each feature behaves with respect to the final grade
for i in data.columns:
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
sns.countplot(i,hue='G3',data=data)
data = pd.get_dummies(data,columns=catogrial_col)
#taking target feature out from the data set
y = data['G3']
X= data.drop('G3',axis=1)
#splitting the data set into train and validation sets
X_train, X_test, y_train, y_test = train_test_split(X, y)
###Output
_____no_output_____
###Markdown
Feature selection using random forest
###Code
# feature selection
def select_features(X_train, y_train, X_test):
# configure to select a subset of features
fs = SelectFromModel(RandomForestClassifier())
# learn relationship from training data
fs.fit(X_train, y_train)
# transform train input data
X_train_fs = fs.transform(X_train)
# transform test input data
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
#feature selection
X_train_fs, X_test_fs, fs = select_features(X_train, y_train, X_test)
# Logistic regression
model = LogisticRegression(solver='liblinear' ).fit(X_train_fs, y_train)
y_hat = model.predict(X_test_fs)
print('Logistic Regression - ')
print('recall_score =', recall_score(y_test, y_hat,average='weighted'))
print('precision_score =',precision_score(y_test,y_hat,average='weighted'))
print(" ")
#fiting train data to model
dtree = DecisionTreeClassifier()
dtree.fit(X_train_fs,y_train)
# Predicting the values of test data
y_pred = dtree.predict(X_test_fs)
print('DecisionTreeClassifier - ')
print('recall_score =', recall_score(y_test, y_pred,average='weighted'))
print('precision_score =',precision_score(y_test,y_pred,average='weighted'))
print(" ")
#Random forest classifier
model = RandomForestClassifier()
model.fit(X_train_fs,y_train)
predict = model.predict(X_test_fs)
print('RandomForestClassifier - ')
print('Precision=',precision_score(y_test,predict,average='weighted'))
print('recall=',recall_score(y_test,predict,average='weighted'))
###Output
Logistic Regression -
recall_score = 0.8773946360153256
precision_score = 0.8665940331571971
DecisionTreeClassifier -
recall_score = 0.8467432950191571
precision_score = 0.8665940331571971
RandomForestClassifier -
Precision= 0.9152417111849159
recall= 0.9157088122605364
###Markdown
Feature selection using Decision tree
###Code
# feature selection
def select_features(X_train, y_train, X_test):
# configure to select a subset of features
fs = SelectFromModel(DecisionTreeClassifier())
# learn relationship from training data
fs.fit(X_train, y_train)
# transform train input data
X_train_fs = fs.transform(X_train)
# transform test input data
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
#feature selection
X_train_fs, X_test_fs, fs = select_features(X_train, y_train, X_test)
# Logistic regression
model = LogisticRegression(solver='liblinear' ).fit(X_train_fs, y_train)
y_hat = model.predict(X_test_fs)
print('Logistic Regression - ')
print('recall_score =', recall_score(y_test, y_hat,average='weighted'))
print('precision_score =',precision_score(y_test,y_hat,average='weighted'))
print(" ")
#fiting train data to model
dtree = DecisionTreeClassifier()
dtree.fit(X_train_fs,y_train)
# Predicting the values of test data
y_pred = dtree.predict(X_test_fs)
print('DecisionTreeClassifier - ')
print('recall_score =', recall_score(y_test, y_pred,average='weighted'))
print('precision_score =',precision_score(y_test,y_pred,average='weighted'))
print(" ")
#Random forest classifier
model = RandomForestClassifier()
model.fit(X_train_fs,y_train)
predict = model.predict(X_test_fs)
print('RandomForestClassifier - ')
print('Precision=',precision_score(y_test,predict,average='weighted'))
print('recall=',recall_score(y_test,predict,average='weighted'))
###Output
Logistic Regression -
recall_score = 0.8735632183908046
precision_score = 0.808588761174968
###Markdown
Feature selection using Logistic regression
###Code
# feature selection
def select_features(X_train, y_train, X_test):
# configure to select a subset of features
fs = SelectFromModel(LogisticRegression())
# learn relationship from training data
fs.fit(X_train, y_train)
# transform train input data
X_train_fs = fs.transform(X_train)
# transform test input data
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
#feature selection
X_train_fs, X_test_fs, fs = select_features(X_train, y_train, X_test)
# Logistic regression
model = LogisticRegression(solver='liblinear' ).fit(X_train_fs, y_train)
y_hat = model.predict(X_test_fs)
print('Logistic Regression - ')
print('recall_score =', recall_score(y_test, y_hat,average='weighted'))
print('precision_score =',precision_score(y_test,y_hat,average='weighted'))
print(" ")
#fiting train data to model
dtree = DecisionTreeClassifier()
dtree.fit(X_train_fs,y_train)
# Predicting the values of test data
y_pred = dtree.predict(X_test_fs)
print('DecisionTreeClassifier - ')
print('recall_score =', recall_score(y_test, y_pred,average='weighted'))
print('precision_score =',precision_score(y_test,y_pred,average='weighted'))
print(" ")
#Random forest classifier
model = RandomForestClassifier()
model.fit(X_train_fs,y_train)
predict = model.predict(X_test_fs)
print('RandomForestClassifier - ')
print('Precision=',precision_score(y_test,predict,average='weighted'))
print('recall=',recall_score(y_test,predict,average='weighted'))
###Output
Logistic Regression -
recall_score = 0.8812260536398467
precision_score = 0.8726836791247982
DecisionTreeClassifier -
recall_score = 0.8888888888888888
precision_score = 0.8726836791247982
RandomForestClassifier -
Precision= 0.9032755056732316
recall= 0.9042145593869731
|
colabs/dv3po_custom_signals.ipynb | ###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV-3PO Custom Signals ParametersDV-3PO Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV-3PO Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
DV-3PO Custom SignalsDV-3PO Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
###Output
_____no_output_____
###Markdown
3. Enter DV-3PO Custom Signals Recipe Parameters 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute DV-3PO Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids', 'kind': 'string_list', 'order': 1, 'description': 'NOAA Weather Station ID', 'default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url', 'kind': 'string', 'order': 2, 'description': 'Feed Sheet URL', 'default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url', 'kind': 'string', 'order': 2, 'description': 'Feed Sheet URL', 'default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV-3PO Custom Signals ParametersDV-3PO Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV-3PO Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV-3PO Custom Signals ParametersDV-3PO Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV-3PO Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter [DV-3PO] Custom Signals Parameters[DV-3PO] Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute [DV-3PO] Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter [DV-3PO] Custom Signals Parameters[DV-3PO] Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute [DV-3PO] Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter [DV-3PO] Custom Signals Parameters[DV-3PO] Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute [DV-3PO] Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'description': 'NOAA Weather Station ID','kind': 'string_list','name': 'station_ids','order': 1,'default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'description': 'Feed Sheet URL','kind': 'string','name': 'sheet_url','order': 2,'default': ''}},
'tab': 'Weather',
'delete': True,
'range': 'A2:K'
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'patch': {
},
'read': {
'sheet': {
'sheet': {'field': {'description': 'Feed Sheet URL','kind': 'string','name': 'sheet_url','order': 2,'default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV-3PO Custom Signals ParametersDV-3PO Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV-3PO Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter [DV-3PO] Custom Signals Parameters[DV-3PO] Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute [DV-3PO] Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV-3PO Custom Signals ParametersDV-3PO Custom Signals allows automated changes to be made to DV360 campaigns based on external signals from weather and social media trends. In the future it will also support news, disaster alerts, stocks, sports, custom APIs, etc. 1. Open the template sheet: [DV-3PO] Custom Signals Configs. 1. Make a copy of the sheet through the menu File -> Make a copy, for clarity we suggest you rename the copy to a meaningful name describing the usage of this copy. 1. In the Station IDs field below enter a comma separated list of NOAA weather station IDs. Most major airports are stations and their ID typically is K followed by the 3 letter airport code, e.g. KORD for Chicago O'Hare International Airport, KSFO for San Francisco international airport, etc. You can get a full list of stations here, the station ID to use is the 'CALL' column of this list. 1. In the Sheet URL field below, enter the URL of the copy of the config sheet you've created. 1. Go to the sheet and configure the rules you'd like to be applied in the Rules tab. 1. In the Advertiser ID column, enter the advertiser ID of the line items you'd like to automatically update. 1. In the Line Item ID colunn, enter the line item ID of the line item you'd like to automatically update. 1. The 'Active' column of the Rules tab allows you to control if the line item should be active or paused. If this field is TRUE the line item will be set to active, if this field is FALSE the line item will be set to inactive. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!C2>30, TRUE, FALSE) will cause the line item to be activated if the temperature of the first station in the Weather tab is above 30 degrees. Leave this field empty if you don't want it to be modified by the tool. 1. The 'Fixed Bid' column of the Rules tab allows you to control the fixed bid amount of the line item. The value set to this field will be applied to the specified line item. You can use a formula to take weather data into consideration to update this field, e.g. =IF(Weather!G2>3, 0.7, 0.4) will cause bid to be set to $0.7 if the wind speed of the first line in the Weather tab is greater than 3 mph, or $0.4 otherwise. Leave this field empty if you don't want it to be modified by the tool.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'station_ids': '', # NOAA Weather Station ID
'auth_read': 'user', # Credentials used for reading data.
'sheet_url': '', # Feed Sheet URL
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV-3PO Custom SignalsThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'weather_gov': {
'auth': 'user',
'stations': {'field': {'name': 'station_ids','kind': 'string_list','order': 1,'description': 'NOAA Weather Station ID','default': ''}},
'out': {
'sheets': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Weather',
'range': 'A2:K',
'delete': True
}
}
}
},
{
'lineitem_beta': {
'auth': 'user',
'read': {
'sheet': {
'sheet': {'field': {'name': 'sheet_url','kind': 'string','order': 2,'description': 'Feed Sheet URL','default': ''}},
'tab': 'Rules',
'range': 'A1:D'
}
},
'patch': {
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____ |
unit_2_sprint_3_assignment_4.ipynb | ###Markdown
###Code
from google.colab import files
uploaded = files.upload()
#imported dataset with a binary target class which is very imbalanced
import pandas as pd
stars = pd.read_csv('pulsar_stars.csv')
stars.head()
#seeing correlation with target class - excess kurtosis of integrated profile and skewness of integrated profile have highest correlation
stars.corr()[-1:]
#checking balance of target class - it's 91 to 9 so a good baseline would be to assume all 0 then you would get 91% accuracy
#so we have to beat that. we can check precision/accuracy later in a confusion matrix
stars['target_class'].value_counts(normalize=True)
#no missing data
stars.isna().sum()
stars.copy().drop('target_class',axis=1).columns
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
features=stars.copy().drop('target_class',axis=1).columns
target=['target_class']
preprocessor = make_pipeline(StandardScaler()) #just practising using pipeline
X = preprocessor.fit_transform(stars[features])
X = pd.DataFrame(X, columns=features)
y = stars[target]
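# note: y is a single-column DataFrame; some sklearn estimators expect a 1-D array (e.g. y.values.ravel()) and will warn otherwise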
#taking function from lecture notes to make holdout group
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
X_train, X_val, X_test, y_train, y_val, y_test=train_validation_test_split(X, y, 0.7, 0.15, 0.15)
%matplotlib inline
from ipywidgets import interact
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
def star_trees(max_depth=1, n_estimators=1):
models = [DecisionTreeClassifier(max_depth=max_depth),
RandomForestClassifier(max_depth=max_depth, n_estimators=n_estimators),
XGBClassifier(max_depth=max_depth, n_estimators=n_estimators)]
for model in models:
name = model.__class__.__name__
model.fit(X_train, y_train)
ax = stars.plot(' Excess kurtosis of the integrated profile', 'target_class', kind='scatter', title=name)
ax.step(X_train, model.predict(X_train), where='mid')
plt.show()
#these charts were not much use in this case
interact(star_trees, max_depth=(1,6,1), n_estimators=(10,40,10));
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import cross_val_score
models = [LogisticRegression(solver='lbfgs', max_iter=1000),
DecisionTreeClassifier(max_depth=3),
DecisionTreeClassifier(max_depth=None),
RandomForestClassifier(max_depth=3, n_estimators=100, n_jobs=-1, random_state=42),
RandomForestClassifier(max_depth=None, n_estimators=100, n_jobs=-1, random_state=42),
XGBClassifier(max_depth=3, n_estimators=100, n_jobs=-1, random_state=42)]
y_proba_ensemble=[]
for model in models:
print(model, '\n')
threshold = 0.5
score = cross_val_score(model, X_train, y_train, scoring='accuracy', cv=5).mean()
print('Cross-Validation Accuracy:', score, '\n', '\n')
y_pred_proba=cross_val_predict(model, X_train, y_train, cv=5, n_jobs=-1,
method='predict_proba')[:,1]
y_pred=y_pred_proba>threshold
cm=pd.DataFrame(confusion_matrix(y_train, y_pred),
columns=['Predicted Negative', 'Predicted Positive'],
index=['Actual Negative', 'Actual Positive'])
print(cm)
#as expected the random forest classifier was the best
#now let's look at feature importance
for model in models:
name = model.__class__.__name__
model.fit(X_train, y_train)
if name == 'LogisticRegression':
coefficients = pd.Series(model.coef_[0], X_train.columns)
coefficients.sort_values().plot.barh(color='#FF6347', title=name)
plt.grid()
plt.show()
else:
importances = pd.Series(model.feature_importances_, X_train.columns)
title = f'{name}, max_depth={model.max_depth}'
importances.sort_values().plot.barh(color='#FF6347', title=title)
plt.grid()
plt.show()
#as expected from looking at the correlation, excess kurtosis is the best
#messing around with this function
def star_bagging(max_depth=1, n_estimators=1):
predicteds = []
for i in range(n_estimators):
title = f'Tree {i+1}'
bootstrap_sample = stars.sample(n=len(stars), replace=True)
#preprocessor = make_pipeline(ce.OrdinalEncoder(), SimpleImputer())
bootstrap_X = preprocessor.fit_transform(bootstrap_sample[[' Excess kurtosis of the integrated profile', ' Standard deviation of the DM-SNR curve']])
bootstrap_y = bootstrap_sample['target_class']
tree = DecisionTreeClassifier(max_depth=max_depth)
tree.fit(bootstrap_X, bootstrap_y)
predicted = viz2D(tree, X, feature1=' Excess kurtosis of the integrated profile', feature2=' Standard deviation of the DM-SNR curve', title=title)
predicteds.append(predicted)
ensembled = np.vstack(predicteds).mean(axis=0)
title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}'
plt.imshow(ensembled.reshape(100, 100), cmap='viridis')
plt.title(title)
plt.xlabel(' Excess kurtosis of the integrated profile')
plt.ylabel(' Standard deviation of the DM-SNR curve')
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.show()
interact(star_bagging, max_depth=(1,6,1), n_estimators=(2,5,1));
def viz2D(fitted_model, X, feature1, feature2, num=100, title=''):
x1 = np.linspace(X[feature1].min(), X[feature1].max(), num)
x2 = np.linspace(X[feature2].min(), X[feature2].max(), num)
X1, X2 = np.meshgrid(x1, x2)
X = np.c_[X1.flatten(), X2.flatten()]
if hasattr(fitted_model, 'predict_proba'):
predicted = fitted_model.predict_proba(X)[:,1]
else:
predicted = fitted_model.predict(X)
plt.imshow(predicted.reshape(num, num), cmap='viridis')
plt.title(title)
plt.xlabel(feature1)
plt.ylabel(feature2)
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.show()
return predicted
#now focusing on XGBClassifier
from sklearn.model_selection import ParameterGrid
from sklearn.model_selection import GridSearchCV
depths=[3,4,5]
n_estimator=[75,200,400]
booster=['gbtree', 'gblinear', 'dart']
#thresholds=[0.5,0.6,0.7]
param_grid = [{'max_depth': [3,4,5], 'n_estimators': [75,200,400],'booster': ['gbtree', 'gblinear', 'dart']}]
scores = ['accuracy','precision', 'recall']
for score in scores:
xgb=GridSearchCV(XGBClassifier(n_jobs=-1,random_state=42),param_grid,scoring=score)
xgb.fit(X_train,y_train)
print("Best parameters set found on grid:")
print()
print(xgb.best_params_)
###Output
Best parameters set found on development set:
{'booster': 'gbtree', 'max_depth': 3, 'n_estimators': 200}
Best parameters set found on development set:
{'booster': 'gblinear', 'max_depth': 3, 'n_estimators': 75}
Best parameters set found on development set:
{'booster': 'gbtree', 'max_depth': 3, 'n_estimators': 200}
###Markdown
Best parameters set found on development set:{'booster': 'gbtree', 'max_depth': 3, 'n_estimators': 200}Best parameters set found on development set:{'booster': 'gblinear', 'max_depth': 3, 'n_estimators': 75}Best parameters set found on development set:{'booster': 'gbtree', 'max_depth': 3, 'n_estimators': 200}
###Code
#so max depth 3 was always best, 200 estimators maximized accuracy and recall, and gbtree was usually the best booster
xgb_tuned=XGBClassifier(booster='gbtree',max_depth=3, n_estimators=200, n_jobs=-1, random_state=42)
xgb_tuned.fit(X_train,y_train)
#so decent improvement in accuracy from below 98 to 98.4
xgb_tuned.score(X_train,y_train), xgb_tuned.score(X_test,y_test)
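# a short added sketch (not in the original notebook): check precision/recall on the
# held-out test set with the confusion matrix and classification_report imported earlier
y_test_pred = xgb_tuned.predict(X_test)
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred),
                   columns=['Predicted Negative', 'Predicted Positive'],
                   index=['Actual Negative', 'Actual Positive']))
print(classification_report(y_test, y_test_pred))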
###Output
_____no_output_____ |
jupyter/Beyond_Linear_Programming.ipynb | ###Markdown
Tutorial: Beyond Linear Programming, (CPLEX Part2)This notebook describes some special cases of LP, as well as some other non-LP techniques, and also under which conditions they should be used. Before continuing, you should ensure that you followed the CPLEX Tutorial Part 1.After completing this unit, you should be able to describe what a network model is, and the benefits of using network models, explain the concepts of nonlinearity and convexity, describe what a piecewise linear function is, and describe the differences between Linear Programming (LP), Integer Programming (IP), Mixed-Integer Programming (MIP), and Quadratic Programming (QP). You should also be able to construct a simple MIP model. >This notebook is part of **[Prescriptive Analytics for Python](http://ibmdecisionoptimization.github.io/docplex-doc/)**>>It requires either an [installation of CPLEX Optimizers](http://ibmdecisionoptimization.github.io/docplex-doc/getting_started.html) or it can be run on [IBM Cloud Pak for Data as a Service](https://www.ibm.com/products/cloud-pak-for-data/as-a-service/) (Sign up for a [free IBM Cloud account](https://dataplatform.cloud.ibm.com/registration/stepone?context=wdp&apps=all>)and you can start using `IBM Cloud Pak for Data as a Service` right away).>> CPLEX is available on IBM Cloud Pack for Data and IBM Cloud Pak for Data as a Service:> - IBM Cloud Pak for Data as a Service: Depends on the runtime used:> - Python 3.x runtime: Community edition> - Python 3.x + DO runtime: full edition> - Cloud Pack for Data: Community edition is installed by default. Please install the `DO` addon in `Watson Studio Premium` for the full editionTable of contents:* [CPLEX Modeling for Python](Use-IBM-Decision-Optimization-CPLEX-Modeling-for-Python)* [Network models](Network-models)* [Non-linearity and Convexity](Non-linearity-and-Convexity)* [Integer Optimization](Integer-Optimization)* [Quadratic Programming](Quadratic-Programming)We will use DOcplex to write small samples to illustrate the topics Use IBM Decision Optimization CPLEX Modeling for PythonLet's use the [DOcplex](http://ibmdecisionoptimization.github.io/docplex-doc/) Python library to write sample models in Python. Download the libraryInstall `CPLEX` (Community Edition) and `docplex` if they are not installed.In `IBM Cloud Pak for Data as a Service` notebooks, `CPLEX` and `docplex` are preinstalled.
###Code
import sys
try:
import cplex
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install cplex
else:
!pip install --user cplex
###Output
_____no_output_____
###Markdown
Installs `DOcplex` if needed
###Code
import sys
try:
import docplex.mp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
###Output
_____no_output_____
###Markdown
If either `CPLEX` or `docplex` where installed in the steps above, you will need to restart your jupyter kernel for the changes to be taken into account. Network modelsIn this topic, you’ll learn what a network model is, and how its structure can be exploited for more efficient solutions. Networks in real lifeSeveral problems encountered in Operations Research (OR) involve networks, such as:Distribution problems (for example, transportation networks)Assignment problems (for example, networks of workers and jobs they could be assigned to)Planning problems (for example, critical path analysis for project planning)Many network models are special LP problems for which specialized solution algorithms exist. It is important to know whether a problem can be formulated as a network model to exploit the special structure.This topic introduces networks in general, as well as some well-known instances of network models. Network modeling conceptsAny network structure can be described using two types of objects:- Nodes: Defined points in the network, for example warehouses.- Arcs: An arc connects two nodes, for example a road connecting two warehouses. An arc can be _directed_, which means than an arc $a_{ij}$ from node $i$ to node $j$ is different from arc $a_ji$ that begins at node $j$ and ends at node $i$. A sequence of arcs connecting two nodes is called a chain. Each arc in a chain shares exactly one node with the preceding arc. When all the arcs in a chain are directed such that it is possible to traverse the chain in the directions of the arcs from the start node to the end node, it is called a path. Different types of network problemsThe following are some well-known types of network problems:- Transportation problem- Transshipment problem- Assignment problem- Shortest path problem- Critical path analysisNext, you'll learn how to recognize each of these, and how their special structure can be exploited. The Transportation ProblemOne of the most common real-world network problems is the transportation problem. This type of problem involves a set of supply nodes and a set of demand nodes. The objective is to minimize the transportation cost from the supply nodes to the demand nodes, so as to satisfy the demand, and without exceeding the suppliers’ capacities. Such a problem can be depicted in a graph, with supply nodes, demand nodes, and connecting arcs. The supply capacity is indicated with the supply nodes, while the demand is indicated with the demand nodes, and the transportation costs are indicated on the arcs. The LP formulation involves one type of variable, namely x(i,j) representing the quantity transported from supply node i to demand node j. The objective is to minimize the total transportation cost across all arcs. The constraints are flow conservation constraints. The first two constraints state that the outflow from each supply node should be less than or equal to the supply capacity. The next three constraints state that the inflow into each demand node should equal the demand at that node. The domain for the shipments on the allowable arcs is set to be greater than or equal to zero, while the shipment quantities on the disallowed arcs are set to zero. Even though arcs (1,4) and (2,3) do not exist in the graph, the variables are included in the slide to show the special structure of the transportation problem. If you were to formulate such a model in practice, you’d simply exclude these variables. 
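Putting the description above in formulas, with $x_{ij}$ the quantity shipped from supply node $i$ to demand node $j$, $c_{ij}$ the transportation cost on arc $(i,j)$, $cap_{i}$ the capacity of supply node $i$ and $d_{j}$ the demand at node $j$, the transportation LP reads:$$minimize \sum_{i,j} c_{ij} x_{ij}\\subject\ to:\ \sum_{j} x_{ij} \le cap_{i}\ \ for\ each\ supply\ node\ i\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sum_{i} x_{ij} = d_{j}\ \ for\ each\ demand\ node\ j\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x_{ij} \ge 0$$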
Formulating a simple transportation problem with DOcplexIn the next section, we formulate the problem described above using DOcplex. What data for the transportation problem?Input nodes are integers ranging in {1, 2}; output nodes are integers ranging from 3 to 5.The data consists of three Python dictionaries:- one dictionary gives capacity values for all input nodes- one dictionary contains demands for all target nodes- one last dictionary holds cost values for some (source, target) pairs of nodes.
###Code
capacities = {1: 15, 2: 20}
demands = {3: 7, 4: 10, 5: 15}
costs = {(1,3): 2, (1,5):4, (2,4):5, (2,5):3}
# Python ranges will be used to iterate on source, target nodes.
source = range(1, 3) # {1, 2}
target = range(3, 6) # {3,4,5}
###Output
_____no_output_____
###Markdown
Create a model instance
###Code
from docplex.mp.model import Model
tm = Model(name='transportation')
###Output
_____no_output_____
###Markdown
Define the decision variables- The continuous variable `x[i,j]` represents the quantity shipped from supply node `i` to demand node `j`; one such variable is created for every (source, target) pair.
###Code
# create flow variables for each couple of nodes
# x(i,j) is the flow going out of node i to node j
x = {(i,j): tm.continuous_var(name='x_{0}_{1}'.format(i,j)) for i in source for j in target}
# each arc comes with a cost. Minimize all costed flows
tm.minimize(tm.sum(x[i,j]*costs.get((i,j), 0) for i in source for j in target))
tm.print_information()
###Output
Model: transportation
- number of variables: 6
- binary=0, integer=0, continuous=6
- number of constraints: 0
- linear=0
- parameters: defaults
- objective: minimize
- problem type is: LP
###Markdown
Set up the constraints- For each source node, the total outbound flow must be smaller than the available quantity.- For each target node, the total inbound flow must be greater than the demand.
###Code
# for each source node, total outgoing flow must be smaller than the available quantity
for i in source:
tm.add_constraint(tm.sum(x[i,j] for j in target) <= capacities[i])
# for each target node, total incoming flow must be greater than the demand
for j in target:
tm.add_constraint(tm.sum(x[i,j] for i in source) >= demands[j])
###Output
_____no_output_____
###Markdown
Express the business objective: minimize total flow costEach arc has a unit cost and we want to minimize the total cost. If an arc has no entry in the dictionary, we assume a zero cost (using the `dict.get` method.
###Code
tm.minimize(tm.sum(x[i,j]*costs.get((i,j), 0) for i in source for j in target))
###Output
_____no_output_____
###Markdown
Solve the modelIf you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.In any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`.
###Code
tms = tm.solve()
assert tms
tms.display()
###Output
solution for: transportation
objective: 0.000
x_1_5 = 15.000
x_2_3 = 10.000
x_2_4 = 10.000
###Markdown
Special structure of network problemThe special structure of the transportation problem, as well as many other types of network problems, allows the use of specialized algorithms that lead to significant reductions in solution time.Another important characteristic of transportation problems (and also some other network problems) is that if all the capacities and demands are integer, then the decision variables will take integer values.This is important to know, because it means that you do not have to use integer variables in such cases. As you'll learn in a later topic, integer variables often lead to problems that require much more computational effort compared to problems with only continuous variables. The Transshipment ProblemThe transshipment problem is similar to the transportation problem, except that intermediate nodes exist in addition to the supply and demand nodes. In this example, nodes 3 and 4 are intermediate nodes. The LP formulation is also similar, in the sense that it involves an objective to minimize the transportation cost across all arcs, and a set of flow conservation constraints. The first two constraints are for the supply nodes, and state that the outflow from each supply node should equal the capacity of that node, plus any inflow into that same node. The next two constraints are for the intermediate nodes, and state that the inflow into an intermediate node should equal the outflow out of that node. The last two constraints are for the demand nodes, and state that the inflow into each demand node should equal the demand at that node. The domain for the variables is to be greater than or equal to zero. It is possible to write the transshipment problem as a transportation problem in order to use specialized algorithms that exploit the structure of transportation problem. This conversion is not covered as part of this course because CPLEX Optimizer does this conversion automatically, but you can find details in the textbooks listed at the end of this course The Assignment ProblemThe Assignment Problem is the problem of assigning one set of items to another, while optimizing a given objective. For example, the problem of assigning workers to jobs so as to minimize the hiring costs. In this example, the workers are represented by nodes 1 through 3, while the jobs are represented with nodes A, B and C. The cost of assigning a worker to a job is shown on each arc. The objective is to minimize the total assignment cost. Again, the constraints can be seen as flow conservation constraints. The first three constraints state that each job is filled by exactly one person, while the last three constraints state that each person is assigned to exactly one job. The variables should now take a value of 1 if worker i is assigned to job j, and zero otherwise. This problem is a special case of the transportation problem, with each supply node capacity set to one, and each demand set to 1. What’s also important to know is that even though the variables must take 0-1 values, they can be declared as continuous variables due to the integrality property, namely that if all capacity and demand quantities are integer, the variables will take integer values. The Shortest Path ProblemThe Shortest Path Problem is the problem of finding the shortest path through a network. For example, to find the minimum travel time between two cities in a network of cities. 
The shortest path problem is a special case of the transshipment problem, where there is exactly one supply node and one demand node, and the supply and demand are both equal to 1. In this example, each node represents a city, and each arc represents the road connecting two cities. The travel time is indicated on each arc. The variable x(i, j) takes a value of 1 if the arc between i and j is included in the shortest path, and zero otherwise. The objective is to minimize the total travel time. As with the other network problems, the constraints can be seen as flow conservation constraints. A constraint exists for each node (or each city) and the constraints state that exactly one arc should be chosen into each city, and exactly one arc should be chosen out of each city. Again, even though the x variables must take 0-1 values, they can be declared as continuous due to the integrality property (that is, all the capacity and demand quantities are integer). Critical Path AnalysisCritical path analysis is a technique used in project planning to find the set of critical activities where a delay in one of the critical activities will lead to an overall project delay. The critical path is the longest path in the network. It represents the minimum completion time of the project, and if any task on the critical path is delayed, the entire project will be delayed. Tasks that do not lie on the critical path may be delayed to some extent without impacting the final completion date of the project. The critical path will change if a non-critical task is delayed to a point where it becomes critical. For example, consider a kitchen remodeling project where the home owners are remodeling their kitchen with the eventual goal of hosting a party. Some of the tasks can be done simultaneously, such as plumbing and electricity, while others can only be started once a previous task is complete, for example the sink can only be installed once the plumbing is complete. The critical path will indicate the minimum time for completing this project, so that they know when they can have their party. Here, the arcs show the task durations, while the nodes show the task start times. For modeling purposes, let t(i) be the start time of the tasks originating at node i. The objective is to minimize the project completion time, namely t7. The constraints indicate the task duration and precedence. For example, the third constraint indicates that the minimum time between starting the sink installation (that is, node 3), and starting the plumbing (that is, node 1) is 3 days. Arcs 3-2 and 4-2 are dummy tasks required to complete the model. Such dummy tasks have no minimum time associated with them, but are used to model precedence involving more than one preceding tasks of varying duration, in this case that the plumbing, electricity, and appliance order time must be complete before appliances can be installed. In this graph, it’s easy to see that the critical path lies on nodes 0 – 2 – 6 – 7, with a total length of 19 days. Therefore, any delay in the appliance ordering or installation will delay the party date. Large projects can lead to very complex networks, and specialized algorithms exist for critical path analysis. CPLEX Network OptimizerAs you’ve now seen, many network problems are special types of LP problems. In many cases, using the Simplex or Dual-simplex Optimizers is the most efficient way to solve them. In some cases, specialized algorithms can solve such problems more efficiently. 
CPLEX automatically invokes the Network Optimizer when it's likely that it would improve solution time compared to the other algorithms. It is also possible to force the use (or not) of the Network Optimizer by setting the `lpopt` parameter ofa DOcplex model to 3 (remember 1 was primal simplex, 2 was dual simplex, and 4 is for barrier). Non-linearity and ConvexityIn this topic, you’ll learn about the concepts of nonlinearity and convexity, and their significance in relation to mathematical programming. Non-linearityIn many problems, the relationships between decision variables are not linear. Examples of industries where nonlinear optimization problems are often encountered are:The chemical process industry, for example oil refining or pipeline designThe finance sector, for example stock portfolio optimization- Nonlinear Programming (NLP) is often used to solve such optimization problems. One of the NLP techniques available through DOcplex is Quadratic Programming (QP), used especially for portfolio optimization. - Sometimes it is possible to approximate a nonlinear function with a set of linear functions and then solve the problem using LP techniques. The success of this depends on whether the nonlinear function is convex or nonconvex. ConvexityA region of the space is _convex_ if any point lying on a straight line between any two points in the region is also in the region.All LPs are convex, because an LP involves the minimization of a linear function over a feasible region defined by a set of linear functions, and linear functions are convex. Approximating a non-linear program with piecewise-linear functionsA convex nonlinear function can be approximated by a set of linear functions, as shown in the next figure. This could be used to allow solution with LP techniques. Such an approximate function consisting of several linear line segments is called a _piecewise linear_ function. We will start by discussing this approximation.Piecewise linear programming is used when dealing with functions consisting of several linear segments. Examples of where this might be appropriate are piecewise linear approximations of convex nonlinear functions, and piecewise linear functions representing economies of scale (for example a unit cost that changes when a certain number of units are ordered.) An example of piecewise linear programming: economies of scaleConsider a transportation problem in which the transportation cost between an origin,$i$, and a destination, $j$, depends on the size of the shipment, $ship_{ij}$:- The first 1000 items have a shipping cost of \$0.40 per item- For the next 2000 items (i.e. items 1001-3000) the cost is \$ 0.20 per item- From item 3001 on, the cost decreases to \$ 0.10 per item.The cost of each quantity bracket remains intact (that is, the cost per unit changes only for additional units, and remains unchanged for the previous quantity bracket). Within each bracket there is a linear relationship between cost and quantity, but at each breakpoint the rate of linear variation changes. If you graph this function, you see that at each breakpoint, the slope of the line changes. The section of the graph between two breakpoints is a linear piece. 
The diagram shows that the total shipping cost is evaluated by 3 different linear functions, each determined by the quantity shipped $Q$:- 0.40 * Q when $Q \le 1000$,- 0.40 * 1000 + 0.20 * (Q - 1000) when $1000 \le Q \le 3000$,- 0.40 * 1000 + 0.20 * 2000 + 0.10 * (Q - 3000) when $Q \ge 3000$.This is an example of a piecewise linear function. Note that this function is _continuous_, that is, it has no 'jump' in its values, but this is not mandatory for piecewise linear functions.
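Written out as a single formula, the total shipping cost for a quantity $Q$ is the piecewise linear function:$$cost(Q) = \begin{cases} 0.4\,Q & 0 \le Q \le 1000\\ 400 + 0.2\,(Q-1000) & 1000 \le Q \le 3000\\ 800 + 0.1\,(Q-3000) & Q \ge 3000\end{cases}$$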
###Code
# Display matplotlib plots in the notebook
%matplotlib inline
# create a new model to attach piecewise
pm = Model(name='pwl')
pwf1 = pm.piecewise_as_slopes([(0, 0), (0.4, 1000), (0.2, 3000)], lastslope=0.1)
# plot the function
pwf1.plot(lx=-1, rx=4000, k=1, color='b', marker='s', linewidth=2)
###Output
_____no_output_____
###Markdown
Defining a piecewise linear functions from break pointsDOcplex also allows to define a piecewise linear function from the set of its _break_ points, that is a sequence of $(x_{i}, y_{i})$ values, plus the final slope towards $+\infty$. Here the sequence of breakpoints is:- (0,0)- (1000, 400): computed with 0.4 marginal cost- (3000, 800): computed as 400 + 0.2 * (3000 - 1000)- final slope is 0.1
###Code
pwf2 = pm.piecewise(preslope=0, breaksxy=[(0, 0), (1000, 400), (3000, 800)], postslope=0.1)
# plot the function
pwf2.plot(lx=-1, rx=4000, k=1, color='r', marker='o', linewidth=2)
###Output
_____no_output_____
###Markdown
To bind a variable $y$ to the result of applying the peicewise linear function to another variable $x$, you just have to add the following constraint to the model:
###Code
x = pm.continuous_var(name='x')
y = pm.continuous_var(name='y')
pm.add_constraint(y == pwf2(x)); # y is constrained to be equal to f(x)
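# a small usage sketch (not in the original tutorial): fixing the shipped quantity
# lets the solver evaluate the piecewise cost, here f(2500) = 400 + 0.2*1500 = 700
pm.add_constraint(x == 2500)
s_pwl = pm.solve()
if s_pwl:
    print('cost of shipping 2500 items:', s_pwl.get_value(y))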
###Output
_____no_output_____
###Markdown
Integer OptimizationIn this topic, you’ll learn how to deal with integer decision variables by using Integer Programming and Mixed-Integer Programming, and how these techniques differ from LP. Problems requiring integersFor some optimization problems the decision variables should take integer values. - One example is problems involving the production of large indivisible items, such as airplanes or cars. It usually does not make sense to use a continuous variable to represent the number of airplanes to produce, because there is no point in manufacturing a partial airplane, and each finished airplane involves a large cost. - Another example of where one would use integer variables is to model a particular state, such as on or off. For example, a unit commitment problem where integer variables are used to represent the state of a particular unit being either on or off. - Planning of investments also requires integer variables, for example a variable that takes a value of 1 to invest in a warehouse, and 0 to ignore it. Finally, integer variables are often used to model logic between different decision, for example that a given tax break is only applicable if a certain investment is made. Different types of integer decisionsMany types of decisions can be modeled by using integer variables. One example is yes/no decisions, with a value of 1 for yes, and 0 for no. For example, if x equals 1, new manufacturing equipment should be installed, and if x equals 0, it should not. Integer variables are also used to model state or mode decisions. For example, if z1 equals 1 the machine operates in mode 1, if z2 equals 1, the machine operates in mode 2, and if z3 equals 1 the machine operates in mode 3. The same integer is often used to express both yes/no decisions and logic. For example, y1 equals 1 could in this case also be used to indicate that machine 1 is installed, and 0 otherwise. Finally, integer variables are used tomodel cases where a value can take only integer values: for example: how many flights should a company operate between two airports. Types of integer variablesIn general, integer variables can take any integer value, such as 0, 1, 2, 3, and so forth. Integers that should only take the values of 1 or 0 are known as binary (or Boolean) variables. Binary variables are also often referred to as Boolean variables because the Boolean values of true and false are analogous to 1 and 0. To ensure that an integer variable can only take the values 0 and 1, one can give it an upper bound of 1 or declare it to be of type binary. In a DOcplex model, decision variables are assumed to be nonnegative unless otherwise specified and the lower bound of 0 does not need to be declared explicitly. Declaring integer decision variables in DOcplexDOcplex has specific methods to create integer and binary variables.
###Code
im = Model(name='integer_programming')
b = im.binary_var(name='boolean_var')
ijk = im.integer_var(name='int_var')
im.print_information()
###Output
Model: integer_programming
- number of variables: 2
- binary=1, integer=1, continuous=0
- number of constraints: 0
- linear=0
- parameters: defaults
- objective: none
- problem type is: MILP
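###Markdown
As noted above, a 0-1 decision can also be modeled as a general integer variable with an upper bound of 1; a minimal sketch (the variable name is illustrative):
###Code
# an integer variable bounded between 0 and 1 behaves exactly like a binary variable
zero_one = im.integer_var(ub=1, name='zero_one_int')
im.print_information()
###Output
_____no_output_____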
###Markdown
Modeling techniques with integer and binary variablesInteger and binary variables are very useful to express logical constraints. Here are a few examples of such constraints. Indicator variablesIndicator variables are binary variables used to indicate whether a certain set of conditions is valid (with the variable equal to 1) or not (with the variable equal to 0). For example, consider a production problem where you want to distinguish between two states, namely production above a minimum threshold, and no production. To model this, define a binary variable $y$ to take a value of 1 if the production is above the minimum threshold (called minProd), and 0 if there is no production. Assume $production$ is a continuous variable containing the produced quantity. This leads to these two constraints.$$production \ge minProd * y\\production \le maxProd * y$$Here, maxProd is an upper bound on the production quantity. Thus, if y = 1, the minimum and maximum production bounds hold, and if y = 0, the production is set to zero. Logical constraints - an exampleFor example, consider an investment decision involving a production plant and two warehouses. - If the production plant is invested in, then either warehouse 1 or warehouse 2 may be invested in (not both).- If the production plant is not invested in, then neither of the two warehouses may be invested in.Let $yPlant$ be 1 if you decide to invest in the production plant, and 0 otherwise. Similarly for $yWarehouse1$ and $yWarehouse2$.Then this example can be modeled as follows:$$yWarehouse1 + yWarehouse2 <= yPlant$$If $yPlant$ is 0 then both $yWarehouse1$ and $yWarehouse2$ are set to zero. Conversely, if one warehouse variable is set to 1, then $yPlant$ is also set to 1. Finally, this constraint also states that the warehouse variables cannot both be equal to 1. IP versus MIPWhen all the decision variables in a linear model should take integer values, the model is an Integer Program (or **IP**). When some of the decision variables may also take continuous values, the model is a Mixed Integer Program (or **MIP**). MIPs are very common in, for example, some supply chain applications where investment decisions may be represented by integers and production quantities are represented by continuous variables.IPs and MIPs are generally much more difficult to solve than LPs.The solution complexity increases with the number of possible combinations of the integer variables, and such problems are often referred to as being “combinatorial”.In the worst case, the solution complexity increases exponentially with the number of integer decision variables.Many advanced algorithms can solve complex IPs and MIPs in reasonable time. An integer programming exampleIn the telephone production problem where the optimal solution found in chapter 2 'Linear programming' had integer values, it is possible that the solution becomes non-integer under certain circumstances, for example:- Change the availability of the assembly machine to 401 hours- Change the painting machine availability to 492 hours- Change the profit for a desk phone to 12.4- Change the profit for a cell phone to 20.2The fractional values for profit are quite realistic. Even though the fractional times for availability are not entirely realistic, these are used here to illustrate how fractional solutions may occur. Let's solve again the telephone production problem with these new data. A detailed explanation of the model is found in notebook 2: 'Linear Programming'
###Code
lm = Model(name='lp_telephone_production')
desk = lm.continuous_var(name='desk')
cell = lm.continuous_var(name='cell')
# write constraints
# constraint #1: desk production is greater than 100
lm.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
lm.add_constraint(cell >= 100)
# constraint #3: assembly time limit
ct_assembly = lm.add_constraint( 0.2 * desk + 0.4 * cell <= 401)
# constraint #4: painting time limit
ct_painting = lm.add_constraint( 0.5 * desk + 0.4 * cell <= 492)
lm.maximize(12.4 * desk + 20.2 * cell)
ls = lm.solve()
lm.print_solution()
###Output
objective: 20948.167
desk=303.333
cell=850.833
###Markdown
As we can see the optimal solution contains fractional values for number of telephones, which are not realistic.To ensure we get integer values in the solution, we can use integer decision variables.Let's solve a new model, identical except that its two decision variables are declared as _integer_ variables.
###Code
im = Model(name='ip_telephone_production')
desk = im.integer_var(name='desk')
cell = im.integer_var(name='cell')
# write constraints
# constraint #1: desk production is greater than 100
im.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
im.add_constraint(cell >= 100)
# constraint #3: assembly time limit
im.add_constraint( 0.2 * desk + 0.4 * cell <= 401)
# constraint #4: painting time limit
im.add_constraint( 0.5 * desk + 0.4 * cell <= 492)
im.maximize(12.4 * desk + 20.2 * cell)
si = im.solve()
im.print_solution()
###Output
objective: 20947.400
desk=303
cell=851
###Markdown
As expected, the IP model returns integer values as optimal solution. This graphic shows the new feasible region where the dots indicate the feasible solutions. That is, solutions where the variables take only integer values. This graphic is not according to scale, because it’s not possible to indicate all the integer points graphically for this example. What you should take away from this graphic, is that the feasible region is now a collection of points, as opposed to a solid area. Because, in this example, the integer solution does not lie on an extreme point of the continuous feasible region, LP techniques would not find the integer optimum. To find the integer optimum, you should use an integer programming technique. Rouding a fractional solutionAn idea that often comes up to deal with fractional solutions is to solve an LP and then round the fractional numbers in order to find an integer solution. However, because the optimal solution is always on the edge of the feasible region, rounding can lead to an infeasible solution, that is, a solution that lies outside the feasible region. In the case of the telephone problem, rounding would produce infeasible results for both types of phones. When large quantities of items are produced, for example thousands of phones, rounding may be still be a good approach to avoid integer variables.In general, you should use an integer programming algorithm to solve IPs. The most well-known of these is the branch-and-bound algorithm. The branch and bound methodThe _branch and bound_ method, implemented in CPLEX Mixed-Integer Optimizer, provides an efficient way to solve IP and MIP problems. This method begins by relaxing the integer requirement and treating the problem as an LP. If all the variables take integer values, the solution is complete. If not, the algorithm begins a tree search. You’ll now see an example of this tree search. Consider this integer programming problem, involving an objective to maximize, three constraints, and three non-negative integer variables (this is the default for DOcplex variables).$maximize\ x + y + 2 z\\subject\ to: 7x + 2y + 3z <= 36\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 5x + 4y + 7z <= 42\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 2x + 3y + 5z <= 28\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x,y,z \ge 0$ Branch and Bound: the root nodeThe first node of the branch and bound tree is the LP relaxation of the original IP model. LP relaxation means that the integer variables have been relaxed to be continuous variables. The solution to the LP relaxation of a maximization IP, such as this, provides an upper bound to the original problem, in this case that bound is eleven and five elevenths. The current lower bound is minus infinity. In this case, the solution is fractional and the tree search continues in order to try and find an integer solution. Branch and Bound: branching on a variableThe algorithm next chooses one of the variables to branch on, in this case $x$, and adds two constraints to create two subproblems. These two constraints are based on the relaxed value of x, namely one and three elevenths. In the one subproblem, $x$ is required to be less than or equal to one, and in the other problem, $x$ is required to be greater than or equal to two, in order to eliminate the fractional solution found. IP2 gives another fractional solution, but IP3 gives an integer solution. This integer solution of 10 is the new lower bound to the original maximization problem, because it is the best current solution to the maximization problem. 
The algorithm will terminate when the gap between the upper and lower bounds is sufficiently small, but at this point there is still more of the tree to explore. Branch and Bound: iteration Two new subproblems are now generated from IP2, and these constraints are determined by the fractional value of z in IP2. In IP4, z must be less than or equal to 5, and in IP3 z must be greater than or equal to 6. IP4 gives another fractional solution, while IP3 is infeasible and can be pruned. When a node is pruned, the node is not explored further in the tree. Next, two more subproblems are created from IP4, namely one with y less than or equal to zero in IP6, and one with y greater than or equal to 1 in IP5. IP6 yields an integer solution of 11, which is an improvement of the previously found lower bound of 10. IP5 gives a fractional solution and can be explored further. So another two subproblems are created from IP5, namely IP8 with z less than or equal to 4, and IP7 with z greater than or equal to 5. However, the constraint added for IP4 specifies that z must be less than or equal to 5, so node IP7 immediately yields an integer solution with an objective value of 11, which is the same objective as for IP6. IP8 yields an integer solution with objective value of 9, which is a worse solution than those previously found and IP8 can therefore be discarded. The optimal solution reported is the integer solution with the best objective value that was found first, namely the solution to IP6. The progress of the Branch & Bound algorithm can be monitored by looking at the CPLEX _log_. Adding the keyword argument `log_output=True` to the `Model.solve()` method will print the log on the standard output.You can see the best bound going down until the gap closes and the final solution of 11 is returned.By default the CPLEX log is not printed.
###Code
bbm = Model(name='b&b')
x, y, z = bbm.integer_var_list(3, name=['x', 'y', 'z'])
bbm.maximize(x + y + 2*z)
bbm.add_constraint(7*x + 2*y + 3*z <= 36)
bbm.add_constraint(5*x + 4*y + 7*z <= 42)
bbm.add_constraint(2*x + 3*y + 5*z <= 28)
bbm.solve(log_output=True);
###Output
Version identifier: 12.10.0.0 | 2019-11-26 | 843d4de2ae
CPXPARAM_Read_DataCheck 1
Found incumbent of value 0.000000 after 0.00 sec. (0.00 ticks)
Tried aggregator 1 time.
Reduced MIP has 3 rows, 3 columns, and 9 nonzeros.
Reduced MIP has 0 binaries, 3 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.00 sec. (0.00 ticks)
Tried aggregator 1 time.
Reduced MIP has 3 rows, 3 columns, and 9 nonzeros.
Reduced MIP has 0 binaries, 3 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.02 sec. (0.00 ticks)
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: deterministic, using up to 12 threads.
Root relaxation solution time = 0.00 sec. (0.00 ticks)
Nodes Cuts/
Node Left Objective IInf Best Integer Best Bound ItCnt Gap
* 0+ 0 0.0000 24.0000 ---
0 0 11.4286 2 0.0000 11.4286 2 ---
* 0+ 0 11.0000 11.4286 3.90%
0 0 cutoff 11.0000 11.4286 2 3.90%
Elapsed time = 0.02 sec. (0.03 ticks, tree = 0.01 MB, solutions = 2)
Root node processing (before b&c):
Real time = 0.02 sec. (0.03 ticks)
Parallel b&c, 12 threads:
Real time = 0.00 sec. (0.00 ticks)
Sync time (average) = 0.00 sec.
Wait time (average) = 0.00 sec.
------------
Total (root+branch&cut) = 0.02 sec. (0.03 ticks)
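###Markdown
Once the solve has completed, the values found for $x$, $y$ and $z$ and the objective of 11 can be displayed (a small additional check, reusing the `bbm` model solved above):
###Code
# display the incumbent solution found by the branch and bound search
bbm.print_solution()
###Output
_____no_output_____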
###Markdown
Modeling yes/no decisions with binary variables: an exampleBinary variables are often used to model yes/no decisions. Consider again the telephone production problem, but ignore the lower bounds of 100 on production for simplicity. The company is considering replacing the assembly machine with a newer machine that requires less time for cell phones, namely 18 minutes per phone, but more time for desk phones, namely 15 minutes per phone. This machine is available for 430 hours, as opposed to the 400 hours of the existing assembly machine, because it requires less downtime. We will design and write a model that uses binary variables to help the company choose between the two machines. The steps to formulate the mixed-integer model are:- Add four new variables (desk1, desk2, cell1, and cell2, to indicate the production on assembly machines 1 and 2, respectively.- Add two constraints to define the total production of desk and cell to equal the sum of production from the two assembly machines.- Rewrite the constraint for assembly machine 1 to use the new variables for that machine (desk1 and cell1).- Add a similar constraint for the production on assembly machine 2.- Define a Boolean variable, y, to take a value of 1 if assembly machine 1 is chosen, and 0 if assembly machine 2 is chosen.- Use the z variable to set the production to zero for the machine that is not chosen. Implementing the yes/no decision model with DOcplexFirst, create a model instance.
###Code
tm2 = Model('decision_phone')
###Output
_____no_output_____
###Markdown
Setup decision variableswe create two sets of (desk, cell) integer variables, one per machine type, plus the total production variables.Note that the total production variables do not need to be declared if the four typed productions are integers.As the sum of two integers, they will always be integers; the less we have of integer variables, the easier CPLEX willsolve the problem.In addition, we define an extra binary variable $z$ to model the choice we are facing: use machine 1 or machine 2.
###Code
# variables for total production
desk = tm2.integer_var(name='desk', lb=100)
cell = tm2.continuous_var(name='cell', lb=100)
# two variables per machine type:
desk1 = tm2.integer_var(name='desk1')
cell1 = tm2.integer_var(name='cell1')
desk2 = tm2.integer_var(name='desk2')
cell2 = tm2.integer_var(name='cell2')
# yes no variable
z = tm2.binary_var(name='z')
###Output
_____no_output_____
###Markdown
Setup constraints- The constraint for painting machine limit is identical to the basic telephone model- Two extra constraints express the total production as the sum of productions on the two assembly machines.- Each assembly machine type has its own constraint, in which variable $z$ expresses the exclusive choice between the two.
###Code
# total production is sum of type1 + type 2
tm2.add_constraint(desk == desk1 + desk2)
tm2.add_constraint(cell == cell1 + cell2)
# production on assembly machine of type 1 must be less than 400 if z is 1, else 0
tm2.add_constraint(0.2 * desk1 + 0.4 * cell1 <= 400 * z)
# production on assembly machine of type 2 must be less than 430 if z is 0, else 0
tm2.add_constraint(0.25 * desk2 + 0.3 * cell2 <= 430 * (1-z))
# painting machine limit is identical
# constraint #4: painting time limit
tm2.add_constraint( 0.5 * desk + 0.4 * cell <= 490)
tm2.print_information()
###Output
Model: decision_phone
- number of variables: 7
- binary=1, integer=5, continuous=1
- number of constraints: 5
- linear=5
- parameters: defaults
- objective: none
- problem type is: MILP
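###Markdown
Aside (an added illustration, not part of the original notebook): the same either-or capacity logic can also be written with indicator constraints instead of the 400*z and 430*(1-z) terms, assuming your docplex version provides `Model.add_indicator`. A minimal, self-contained sketch that does not modify `tm2`:
###Code
# Illustrative sketch only: the machine choice expressed with indicator constraints.
# Variable names mirror the model above; nothing here changes tm2.
from docplex.mp.model import Model
tmi = Model('decision_phone_indicator')
d1 = tmi.integer_var(name='desk1')
c1 = tmi.integer_var(name='cell1')
d2 = tmi.integer_var(name='desk2')
c2 = tmi.integer_var(name='cell2')
use1 = tmi.binary_var(name='use_machine1')
# if use1 == 1, machine 1's capacity applies and machine 2 stays idle; if use1 == 0, the reverse
tmi.add_indicator(use1, 0.2 * d1 + 0.4 * c1 <= 400, active_value=1)
tmi.add_indicator(use1, d2 + c2 == 0, active_value=1)
tmi.add_indicator(use1, 0.25 * d2 + 0.3 * c2 <= 430, active_value=0)
tmi.add_indicator(use1, d1 + c1 == 0, active_value=0)
tmi.print_information()
###Output
_____no_output_____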
###Markdown
Expressing the objectiveThe objective is identical: maximize total profit, using total productions.
###Code
tm2.maximize(12 * desk + 20 * cell)
###Output
_____no_output_____
###Markdown
Solve with the Decision Optimization solve service
###Code
tm2s= tm2.solve(log_output=True)
assert tm2s
tm2.print_solution()
###Output
Version identifier: 12.10.0.0 | 2019-11-26 | 843d4de2ae
CPXPARAM_Read_DataCheck 1
Found incumbent of value 12800.000000 after 0.00 sec. (0.00 ticks)
Tried aggregator 1 time.
MIP Presolve modified 2 coefficients.
Reduced MIP has 5 rows, 7 columns, and 14 nonzeros.
Reduced MIP has 1 binaries, 5 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.00 sec. (0.01 ticks)
Probing fixed 0 vars, tightened 1 bounds.
Probing time = 0.00 sec. (0.00 ticks)
Tried aggregator 1 time.
Detecting symmetries...
MIP Presolve modified 1 coefficients.
Reduced MIP has 5 rows, 7 columns, and 14 nonzeros.
Reduced MIP has 1 binaries, 6 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.02 sec. (0.01 ticks)
Probing time = 0.00 sec. (0.00 ticks)
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: deterministic, using up to 12 threads.
Root relaxation solution time = 0.00 sec. (0.01 ticks)
Nodes Cuts/
Node Left Objective IInf Best Integer Best Bound ItCnt Gap
* 0+ 0 12800.0000 32800.0000 156.25%
* 0 0 integral 0 23200.0000 23200.0000 1 0.00%
Elapsed time = 0.02 sec. (0.05 ticks, tree = 0.00 MB, solutions = 2)
Root node processing (before b&c):
Real time = 0.02 sec. (0.05 ticks)
Parallel b&c, 12 threads:
Real time = 0.00 sec. (0.00 ticks)
Sync time (average) = 0.00 sec.
Wait time (average) = 0.00 sec.
------------
Total (root+branch&cut) = 0.02 sec. (0.05 ticks)
objective: 23200.000
desk=100
cell=1100.000
desk2=100
cell2=1100
###Markdown
ConclusionThis model demonstrates that the optimal solution is to use machine 2, producing 100 desk phones and 1100 cell phones. Using binary variables for logical decisionsWhat if the company had to choose between 3 possible candidates for the assembly machine, as opposed to two?The above model can be generalized with three binary variables $z_{1}$, $z_{2}$, $z_{3}$, each of which is equal to 1 only if machine type 1, 2, or 3 is used. But then we need to express that _exactly_ one of those variables must be equal to 1. How can we achieve this?The answer is to add the following constraint to the model:$$z_{1} + z_{2} + z_{3} = 1$$Thus, if one of the z variables is equal to 1, the two others are equal to zero (remember binary variables can take value 0 or 1). A minimal code sketch of this three-machine variant is shown just before the portfolio data setup below. Quadratic ProgrammingIn this topic, you’ll learn what a quadratic program (or QP) is, and learn how quadratic programming applies to portfolio management. Quadratic FunctionsMathematically, a function is quadratic if: - The variables are only first or second degree (that is, one variable may be multiplied by another variable, and any of the variables may be squared) and- The coefficients of the variables are constant numeric values (that is, integers or real numbers). A quadratic function is also known as a second degree polynomial function. Geometrically, a quadratic function is a curve or curved surface. For example, a quadratic function in two dimensions is a curved line, such as a parabola, hyperbola, ellipse, or circle. What is a Quadratic Program?Quadratic Programs (or QPs) have quadratic objectives and linear constraints. A model that has quadratic functions in the constraints is a Quadratically Constrained Program (or QCP). The objective function of a QCP may be quadratic or linear. A simple formulation of a QP is:$$ \text{minimize}\ \frac{1}{2} x^{t}Qx + c^{t}x \\ \text{subject to}\ \ Ax \ge b \\ \ \ lb \le x \le ub $$The first objective term is quadratic, with Q being the matrix of objective function coefficients of the quadratic terms. The second term and the constraints are linear. CPLEX Optimizer can solve convex QP and QCP problems. Quadratic programming is used in several real-world situations, for example portfolio management or chemical process modeling. In the next sections, you’ll see how QP applies to portfolio management. Portfolio managementIn order to mitigate risk while ensuring a reasonable level of return, investors purchase a variety of securities and combine these into an investment portfolio. Each security has an expected return and an associated level of risk (or variance). Securities sometimes covary, that is, they change together with some classes of securities, and in the opposite direction of other classes of securities. An example of positive covariance is when shares in technology companies follow similar patterns of increases and decreases in value. On the other hand, as the price of oil rises, shares in oil companies may increase in value, but plastics manufacturers, who depend on petroleum as a major primary resource, may see their shares decline in value as their costs go up and vice versa. This is negative covariance. To optimize a portfolio in terms of risk and return, an investor will evaluate the sum of expected returns of the securities, the total variances of the securities, and the covariances of the securities.
A portfolio that contains a large number of positively covariant securities is more risky (and potentially more rewarding) than one that contains a mix of positively and negatively covariant securities. Potfolio optimization: what use?Portfolio optimization is used to select securities to maximize the rate of return, while managing the volatility of the portfolio and remaining within the investment budget. As the securities covary with one another, selecting the right mix of securities can change or even reduce the volatility of the portfolio with the same expected return. At a given expected rate of return, there is one portfolio which has the lowest risk. If you plot each lowest-risk portfolio for each expected rate of return, the result is a convex graph, called the efficient frontier. The risk-return characteristics of a portfolio change in a nonlinear fashion, and quadratic expressions are used to model them. Data comes in two parts:- Basic data on shares: activity sector, expected return rate, and whether or not activity is based in North America- The covariance square matrix for all pairs of shares.The `pandas` Python data analysis library is used to store the data. Let's set up and declare the data.
###Code
import pandas as pd
from pandas import DataFrame
sec_data = {
'sector': ['treasury', 'hardware', 'theater', 'telecom', 'brewery', 'highways', 'cars', 'bank', 'software',
'electronics'],
'return': [5, 17, 26, 12, 8, 9, 7, 6, 31, 21],
'area': ['N-Am.', 'N-Am.', 'N-Am.', 'N-Am.', "ww", 'ww', 'ww', 'ww', 'ww', 'ww']
}
df_secs = DataFrame(sec_data, columns=['sector', 'return', 'area'])
df_secs.set_index(['sector'], inplace=True)
# store set of share names
securities = df_secs.index
df_secs
###Output
_____no_output_____
###Markdown
The covariance matrixThe covariance matrix is a square matrix (its size is the number of shares). It is also stored in a pandas DataFrame.
###Code
# the variance matrix
var = {
"treasury": [0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"hardware": [0, 19, -2, 4, 1, 1, 1, 0.5, 10, 5],
"theater": [0, -2, 28, 1, 2, 1, 1, 0, -2, -1],
"telecom": [0, 4, 1, 22, 0, 1, 2, 0, 3, 4],
"brewery": [0, 1, 2, 0, 4, -1.5, -2, -1, 1, 1],
"highways": [0, 1, 1, 1, -1.5, 3.5, 2, 0.5, 1, 1.5],
"cars": [0, 1, 1, 2, -2, 2, 5, 0.5, 1, 2.5],
"bank": [0, 0.5, 0, 0, -1, 0.5, 0.5, 1, 0.5, 0.5],
"software": [0, 10, -2, 3, 1, 1, 1, 0.5, 25, 8],
"electronics": [0, 5, -1, 4, 1, 1.5, 2.5, 0.5, 8, 16]
}
dfv = pd.DataFrame(var, index=securities, columns=securities)
dfv
###Output
_____no_output_____
###Markdown
There is a constraint that the total fraction of wealth invested in North American securities must be greater than some minimum value. To implement this constraint, we add a new column to df_secs that is equal to 1 if and only if the area column equals "N-Am.", and 0 otherwise (see later how we use this column to implement the constraint).
###Code
def is_nam(s):
return 1 if s == 'N-Am.' else 0
df_secs['is_na'] = df_secs['area'].apply(is_nam)
df_secs
from docplex.mp.model import Model
mdl = Model(name='portfolio_miqp')
###Output
_____no_output_____
###Markdown
We model variables as the _fraction_ of wealth to invest in each share. Each variable is a continuous variable between 0 and 1. Variables are stored in a column of the dataframe.
###Code
# create variables
df_secs['frac'] = mdl.continuous_var_list(securities, name='frac', ub=1)
###Output
_____no_output_____
###Markdown
Express the business constraintsThe business constraints are the following:- the sum of allocated fractions equals 100%- each security cannot exceed a certain percentage of the initially allocated wealth (here 30%)- at least 40% of the wealth must be invested in securities hosted in North America- the compound return on investment must be greater than or equal to a minimum target (here 9%)
###Code
# max fraction
frac_max = 0.3
for row in df_secs.itertuples():
mdl.add_constraint(row.frac <= 0.3)
# sum of fractions equal 100%
mdl.add_constraint(mdl.sum(df_secs.frac) == 1);
# north america constraint:
# - use the 1-0 'is_na' column added above
# compute the scalar product of frac variables and the 1-0 'is_na' column and set a minimum
mdl.add_constraint(mdl.dot(df_secs.frac, df_secs.is_na) >= .4);
# ensure minimal return on investment
target_return = 9 # return data is expressed in percents
# again we use scalar product to compute compound return rate
# keep the expression to use as a kpi.
actual_return = mdl.dot(df_secs.frac, df_secs['return'])
mdl.add_kpi(actual_return, 'ROI')
# keep the constraint for later use (more on this later)
ct_return = mdl.add_constraint(actual_return >= 9);
###Output
_____no_output_____
###Markdown
Express the objectiveThe objective or goal is to minimize risk, here computed as the variance of the allocation, given a minimum return rate is guaranteed.Variance is computed as a _quadratic_ expression, which makes this model a Quadratic Programming (QP) model
###Code
# KPIs
fracs = df_secs.frac
variance = mdl.sum(float(dfv[sec1][sec2]) * fracs[sec1] * fracs[sec2] for sec1 in securities for sec2 in securities)
mdl.add_kpi(variance, 'Variance')
# finally the objective
mdl.minimize(variance)
###Output
_____no_output_____
###Markdown
Solve the modelIf you’re using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and require a paid subscription or product installation.We display the objective and KPI values after the solve by calling the method report() on the model.
###Code
assert mdl.solve(), "Solve failed"
mdl.report()
###Output
* model portfolio_miqp solved with objective = 0.406
* KPI: ROI = 9.000
* KPI: Variance = 0.406
###Markdown
The model has solved with a target return of 9% and a variance of 0.406.
###Code
all_fracs = {}
for row in df_secs.itertuples():
pct = 100 * row.frac.solution_value
all_fracs[row[0]] = pct
print('-- fraction allocated in: {0:<12}: {1:.2f}%'.format(row[0], pct))
###Output
-- fraction allocated in: treasury : 30.00%
-- fraction allocated in: hardware : 2.08%
-- fraction allocated in: theater : 5.46%
-- fraction allocated in: telecom : 2.46%
-- fraction allocated in: brewery : 15.35%
-- fraction allocated in: highways : 8.60%
-- fraction allocated in: cars : 1.61%
-- fraction allocated in: bank : 29.00%
-- fraction allocated in: software : 4.34%
-- fraction allocated in: electronics : 1.10%
###Markdown
Let's display these fractions in a pie chart using the Python package [*matplotlib*](http://matplotlib.org/).
###Code
%matplotlib inline
import matplotlib.pyplot as plt
def display_pie(pie_values, pie_labels, colors=None,title=''):
plt.axis("equal")
plt.pie(pie_values, labels=pie_labels, colors=colors, autopct="%1.1f%%")
plt.title(title)
plt.show()
display_pie( list(all_fracs.values()), list(all_fracs),title='Allocated Fractions')
###Output
_____no_output_____
###Markdown
What-if analysis: trying different values for target returnThe above model was solved with a 'hard coded' value of 9% for the target. Now, one can wonder how the variance would vary if we changed this target return value.In this part, we will leverage DOcplex model editing capabilities to explore different scenarios with different target return values.We will run the model for target return values between 5% and 20%. For each possible target return value, we modify the right-hand side (or _rhs_) of the `ct_return` constraint we kept as a variable, and solve again, keeping the values in a list.
###Code
target_returns = range(5,21) # from 5 to 20, included
variances = []
for target in target_returns:
# modify the constraint's right hand side.
ct_return.rhs = target
cur_s = mdl.solve()
assert cur_s # solve is OK
cur_variance = variance.solution_value
print('- for a target return of: {0}%, variance={1}'.format(target, cur_variance))
variances.append(cur_variance)
###Output
- for a target return of: 5%, variance=0.28105252209449944
- for a target return of: 6%, variance=0.28105252214416476
- for a target return of: 7%, variance=0.28105252225011274
- for a target return of: 8%, variance=0.30818590869638357
- for a target return of: 9%, variance=0.40557734940356227
- for a target return of: 10%, variance=0.5503435250054378
- for a target return of: 11%, variance=0.7417945731282698
- for a target return of: 12%, variance=0.9798459646928664
- for a target return of: 13%, variance=1.2598935443762442
- for a target return of: 14%, variance=1.5813755540443808
- for a target return of: 15%, variance=1.9442235946080064
- for a target return of: 16%, variance=2.3469592331334908
- for a target return of: 17%, variance=2.7889850545628727
- for a target return of: 18%, variance=3.2707284094234224
- for a target return of: 19%, variance=3.792503995389222
- for a target return of: 20%, variance=4.3543118101284675
###Markdown
Again we use `matplotlib` to plot variances vs. target returns.
###Code
plt.plot(target_returns, variances, 'bo-')
plt.title('Variance vs. Target Return')
plt.xlabel('target return (in %)')
plt.ylabel('variance')
plt.show()
###Output
_____no_output_____ |
nb/training_desi_complexdust_speculator.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive/speculator_fork
import os
import pickle
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from speculator import SpectrumPCA
from speculator import Speculator
# read DESI wavelength
wave = np.load('wave_fsps.npy')
n_param = 10
n_wave = len(wave)
batches = '0_399'
n_pcas = 50
# load trained PCA basis object
print('training PCA bases')
PCABasis = SpectrumPCA(
n_parameters=n_param, # number of parameters
n_wavelengths=n_wave, # number of wavelength values
n_pcas=n_pcas, # number of pca coefficients to include in the basis
spectrum_filenames=None, # list of filenames containing the (un-normalized) log spectra for training the PCA
parameter_filenames=[], # list of filenames containing the corresponding parameter values
parameter_selection=None) # pass an optional function that takes in parameter vector(s) and returns True/False for any extra parameter cuts we want to impose on the training sample (eg we may want to restrict the parameter ranges)
PCABasis._load_from_file('DESI_complexdust.0_199.seed0.pca%i.hdf5' % n_pcas)
_training_theta = np.load('DESI_complexdust.%s.seed0.0_199pca%i_parameters.npy'% (batches, n_pcas))
_training_pca = np.load('DESI_complexdust.%s.seed0.0_199pca%i_pca.npy'% (batches, n_pcas))
training_theta = tf.convert_to_tensor(_training_theta.astype(np.float32))
training_pca = tf.convert_to_tensor(_training_pca.astype(np.float32))
print('training set size = %i' % training_pca.shape[0])
# train Speculator
speculator = Speculator(
n_parameters=n_param, # number of model parameters
wavelengths=wave, # array of wavelengths
pca_transform_matrix=PCABasis.pca_transform_matrix,
parameters_shift=PCABasis.parameters_shift,
parameters_scale=PCABasis.parameters_scale,
pca_shift=PCABasis.pca_shift,
pca_scale=PCABasis.pca_scale,
spectrum_shift=PCABasis.spectrum_shift,
spectrum_scale=PCABasis.spectrum_scale,
n_hidden=[256, 256, 256], # network architecture (list of hidden units per layer)
restore=False,
optimizer=tf.keras.optimizers.Adam()) # optimizer for model training
# cooling schedule
lr = [1e-3, 1e-4, 1e-5, 1e-6]
batch_size = [5000, 10000, 50000, 100000]#int(training_theta.shape[0])]
gradient_accumulation_steps = [1, 1, 1, 1] # split the largest batch size into 10 when computing gradients to avoid memory overflow
# early stopping set up
patience = 20
# train using cooling/heating schedule for lr/batch-size
for i in range(len(lr)):
print('learning rate = ' + str(lr[i]) + ', batch size = ' + str(batch_size[i]))
# set learning rate
speculator.optimizer.lr = lr[i]
n_training = training_theta.shape[0]
# create iterable dataset (given batch size)
training_data = tf.data.Dataset.from_tensor_slices((training_theta, training_pca)).shuffle(n_training).batch(batch_size[i])
# set up training loss
training_loss = [np.infty]
validation_loss = [np.infty]
best_loss = np.infty
early_stopping_counter = 0
# loop over epochs
while early_stopping_counter < patience:
# loop over batches
for theta, pca in training_data:
# training step: check whether to accumulate gradients or not (only worth doing this for very large batch sizes)
if gradient_accumulation_steps[i] == 1:
loss = speculator.training_step(theta, pca)
else:
loss = speculator.training_step_with_accumulated_gradients(theta, pca, accumulation_steps=gradient_accumulation_steps[i])
# compute validation loss at the end of the epoch
validation_loss.append(speculator.compute_loss(training_theta, training_pca).numpy())
# early stopping condition
if validation_loss[-1] < best_loss:
best_loss = validation_loss[-1]
early_stopping_counter = 0
else:
early_stopping_counter += 1
if early_stopping_counter >= patience:
speculator.update_emulator_parameters()
speculator.save('_DESI_complexdust_model.%s.pca%i.log' % (batches, n_pcas))
attributes = list([
list(speculator.W_),
list(speculator.b_),
list(speculator.alphas_),
list(speculator.betas_),
speculator.pca_transform_matrix_,
speculator.pca_shift_,
speculator.pca_scale_,
speculator.spectrum_shift_,
speculator.spectrum_scale_,
speculator.parameters_shift_,
speculator.parameters_scale_,
speculator.wavelengths])
# save attributes to file
f = open('DESI_complexdust_model.%s.pca%i.log.pkl' % (batches, n_pcas), 'wb')
pickle.dump(attributes, f)
f.close()
print('Validation loss = %s' % str(best_loss))
speculator = Speculator(restore=True, restore_filename='_DESI_complexdust_model.%s.pca%i.log' % (batches, n_pcas))
# read in training parameters and data
theta_test = np.load('DESI_complexdust.theta_test.npy')
logspectrum_test = np.load('DESI_complexdust.logspectrum_fsps_test.npy')
spectrum_test = 10**logspectrum_test
logspectrum_spec = speculator.log_spectrum(theta_test.astype(np.float32))
spectrum_spec = 10**logspectrum_spec
# figure comparing Speculator log spectrum to FSPS log spectrum
fig = plt.figure(figsize=(15,5))
sub = fig.add_subplot(111)
for ii, i in enumerate(np.random.choice(len(theta_test), size=5)):
sub.plot(wave, logspectrum_spec[i], c='C%i' % ii, ls='-', label='Speculator')
sub.plot(wave, logspectrum_test[i], c='C%i' % ii, ls=':', label='FSPS')
if ii == 0: sub.legend(loc='upper right', fontsize=20)
sub.set_xlabel('wavelength ($A$)', fontsize=25)
sub.set_xlim(3e3, 1e4)
sub.set_ylabel('log flux', fontsize=25)
fig.savefig('validate_desi_complexdust.%s.pca%i.0.png' % (batches, n_pcas), bbox_inches='tight')
# more quantitative accuracy test of the Speculator model
frac_dspectrum = (spectrum_spec - spectrum_test) / spectrum_test
frac_dspectrum_quantiles = np.nanquantile(frac_dspectrum,
[0.0005, 0.005, 0.025, 0.5, 0.975, 0.995, 0.9995], axis=0)
fig = plt.figure(figsize=(15,5))
sub = fig.add_subplot(111)
sub.fill_between(wave, frac_dspectrum_quantiles[0],
frac_dspectrum_quantiles[6], fc='C0', ec='none', alpha=0.1, label='99.9%')
sub.fill_between(wave, frac_dspectrum_quantiles[1],
frac_dspectrum_quantiles[5], fc='C0', ec='none', alpha=0.2, label='99%')
sub.fill_between(wave, frac_dspectrum_quantiles[2],
frac_dspectrum_quantiles[4], fc='C0', ec='none', alpha=0.3, label='95%')
sub.plot(wave, frac_dspectrum_quantiles[3], c='C0', ls='-')
sub.plot(wave, np.zeros(len(wave)), c='k', ls=':')
sub.legend(loc='upper right', fontsize=20)
sub.set_xlabel('wavelength ($A$)', fontsize=25)
sub.set_xlim(3e3, 1e4)
sub.set_ylabel(r'$(f_{\rm speculator} - f_{\rm fsps})/f_{\rm fsps}$', fontsize=25)
sub.set_ylim(-0.1, 0.1)
fig.savefig('validate_desi_complexdust.%s.pca%i.1.png' % (batches, n_pcas), bbox_inches='tight')
###Output
_____no_output_____ |
Code/Lab/Airbnb.ipynb | ###Markdown
Airbnb Data First we read in the data
###Code
import pandas as pd
import matplotlib.pyplot as plt
url1 = "http://data.insideairbnb.com/united-states/"
url2 = "ny/new-york-city/2016-02-02/data/listings.csv.gz"
full_df = pd.read_csv(url1+url2, compression="gzip")
full_df.head()
###Output
/home/chase/Programming/anaconda3/lib/python3.5/site-packages/IPython/core/interactiveshell.py:2723: DtypeWarning: Columns (40) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
We don't want all data, so let's focus on a few variables.
###Code
df = full_df[["id", "price", "number_of_reviews", "review_scores_rating"]]
df.head()
###Output
_____no_output_____
###Markdown
Need to convert prices to floats
###Code
df.replace({'price': {'\$': ''}}, regex=True, inplace=True)
df.replace({'price': {'\,': ''}}, regex=True, inplace=True)
df['price'] = df['price'].astype('float64', copy=False)
###Output
/home/chase/Programming/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py:3050: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
regex=regex)
/home/chase/Programming/anaconda3/lib/python3.5/site-packages/ipykernel/__main__.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
app.launch_new_instance()
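###Markdown
The `SettingWithCopyWarning` above is raised because `df` is a slice of `full_df`. A warning-free variant (an aside, not from the original notebook) is to take an explicit copy and convert the price column in one step, using the same regex-based replace as above:
###Code
# Alternative that avoids SettingWithCopyWarning: work on an explicit copy and
# strip '$' and ',' before converting to float.
df = full_df[["id", "price", "number_of_reviews", "review_scores_rating"]].copy()
df["price"] = df["price"].replace({r"\$": "", ",": ""}, regex=True).astype("float64")
df.head()
###Output
_____no_output_____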
###Markdown
We might think that better apartments get rented more often. Let's plot a scatter plot (or a set of box plots) of the number of reviews vs. the review score.
###Code
df.plot.scatter(x="number_of_reviews", y="review_scores_rating", figsize=(10, 8), alpha=0.2)
bins = [0, 5, 10, 25, 50, 100, 350]
boxplot_vecs = []
fig, ax = plt.subplots(figsize=(10, 8))
for i in range(1, 7):
lb = bins[i-1]
ub = bins[i]
foo = df["review_scores_rating"][df["number_of_reviews"].apply(lambda x: lb <= x <= ub)].dropna()
boxplot_vecs.append(foo.values)
ax.boxplot(boxplot_vecs, labels=bins[:-1])
plt.show()
###Output
_____no_output_____
###Markdown
Better reviews are also correlated with higher prices
###Code
df.plot.scatter(x="review_scores_rating", y="price", figsize=(10, 8), alpha=0.2)
###Output
_____no_output_____ |
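###Markdown
To put numbers on the relationships eyeballed in the scatter plots, a quick correlation matrix can help (a small addition, not in the original notebook):
###Code
# Pearson correlations between price, review volume and review score
df[["price", "number_of_reviews", "review_scores_rating"]].corr()
###Output
_____no_output_____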
demo/mmaction2_tutorial.ipynb | ###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.8.0+cu101 True
0.16.0
10.1
GCC 7.3
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map_k400.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305307
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the training set and 10 videos with their labels as the test set.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with the filepath and label, which are split with a whitespace. Modify the configIn the next step, we need to modify the config for the training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
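Before editing the config, one practical aside (not part of the official tutorial): an annotation file in this "filename label" format for your own clips can be written with a few lines of plain Python. The folder layout below is hypothetical.
###Code
# Hypothetical sketch: write a VideoDataset-style annotation file for your own videos.
# Assumes clips are grouped in one sub-folder per class under 'my_dataset/train/'.
import os
data_root = 'my_dataset/train'
class_names = sorted(os.listdir(data_root))  # one sub-folder per class
with open('my_dataset/my_train_video.txt', 'w') as f:
    for label, cls in enumerate(class_names):
        for fname in sorted(os.listdir(os.path.join(data_root, cls))):
            # each line: <video path relative to data_prefix> <integer label>
            f.write(f'{cls}/{fname} {label}\n')
###Output
_____no_output_____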
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU, and by a further 16 because videos_per_gpu is also reduced by a factor of 16.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce the the times of printing log
cfg.log_config.interval = 5
# Set seed thus the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Save the best
cfg.evaluation.save_best='auto'
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01),
train_cfg=None,
test_cfg=dict(average_clips=None))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 10
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=4,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
evaluation = dict(
interval=5,
metrics=['top_k_accuracy', 'mean_class_accuracy'],
save_best='auto')
work_dir = './tutorial_exps'
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, lets initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Use load_from_torchvision loader
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated by the default evaluation. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[ ] 0/10, elapsed: 0s, ETA:
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.8.0+cu101 True
0.16.0
10.1
GCC 7.3
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'tools/data/kinetics/label_map_k400.txt'
results = inference_recognizer(model, video)
labels = open(label).readlines()
labels = [x.strip() for x in labels]
results = [(labels[k[0]], k[1]) for k in results]
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305307
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/en/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the training set and 10 videos with their labels as the test set.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with the filepath and label, which are split with a whitespace. Modify the configIn the next step, we need to modify the config for the training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU, and by a further 16 because videos_per_gpu is also reduced by a factor of 16.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce the the times of printing log
cfg.log_config.interval = 5
# Set seed thus the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Save the best
cfg.evaluation.save_best='auto'
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01),
train_cfg=None,
test_cfg=dict(average_clips=None))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 10
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
evaluation = dict(
interval=5,
metrics=['top_k_accuracy', 'mean_class_accuracy'],
save_best='auto')
work_dir = './tutorial_exps'
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, lets initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Use load_from_torchvision loader
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated by the default evaluation. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[ ] 0/10, elapsed: 0s, ETA:
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.8.0+cu101 True
0.16.0
10.1
GCC 7.3
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map_k400.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305307
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the training set and 10 videos with their labels as the test set.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with the filepath and label, which are split with a whitespace. Modify the configIn the next step, we need to modify the config for the training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU, and by a further 16 because videos_per_gpu is also reduced by 16x below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce the number of times the log is printed
cfg.log_config.interval = 5
# Set the seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Save the best
cfg.evaluation.save_best='auto'
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01),
train_cfg=None,
test_cfg=dict(average_clips=None))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 10
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
evaluation = dict(
interval=5,
metrics=['top_k_accuracy', 'mean_class_accuracy'],
save_best='auto')
work_dir = './tutorial_exps'
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Use load_from_torchvision loader
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN used for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation protocol. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
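Before running the test below, it can also be instructive to look inside the pre-trained checkpoint and see the 400-class classification head that was discarded. A minimal sketch with plain PyTorch (treating the usual mmcv `state_dict` wrapping as an assumption):

```python
import torch

ckpt = torch.load(
    'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth',
    map_location='cpu')
# mmcv-style checkpoints usually keep the weights under 'state_dict' (assumption)
state_dict = ckpt.get('state_dict', ckpt)

for name, param in state_dict.items():
    if 'cls_head' in name:
        # expect shapes like (400, 2048) / (400,), which cannot fit the 2-class head
        print(name, tuple(param.shape))
```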
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[ ] 0/10, elapsed: 0s, ETA:
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.8.0+cu101 True
0.16.0
10.1
GCC 7.3
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'tools/data/kinetics/label_map_k400.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305307
###Markdown
Train a recognizer on customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usage can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for the training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
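If your own videos are organized as one sub-folder per class, an annotation file in this format can be generated with a short script. A minimal sketch (plain Python; the `my_videos/` layout and output filename are hypothetical placeholders):

```python
import os

video_root = 'my_videos'  # hypothetical layout: my_videos/<class_name>/<video>.mp4
class_names = sorted(os.listdir(video_root))

lines = []
for label, class_name in enumerate(class_names):
    class_dir = os.path.join(video_root, class_name)
    for video in sorted(os.listdir(class_dir)):
        if video.endswith('.mp4'):
            # VideoDataset expects "<path relative to data_prefix> <integer label>"
            lines.append(f'{os.path.join(class_name, video)} {label}')

with open('my_train_video.txt', 'w') as f:
    f.write('\n'.join(lines) + '\n')
print(f'wrote {len(lines)} entries covering {len(class_names)} classes')
```

The `data_prefix` in the config would then typically point at `my_videos/`.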
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU, and by a further 16 because videos_per_gpu is also reduced by 16x below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce the number of times the log is printed
cfg.log_config.interval = 5
# Set the seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Save the best
cfg.evaluation.save_best='auto'
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01),
train_cfg=None,
test_cfg=dict(average_clips=None))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 10
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
evaluation = dict(
interval=5,
metrics=['top_k_accuracy', 'mean_class_accuracy'],
save_best='auto')
work_dir = './tutorial_exps'
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Use load_from_torchvision loader
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN used for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation protocol. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
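For reference, the top-k accuracy reported by the evaluation can be recomputed from raw scores in a few lines of NumPy. This is a standalone illustration, not MMAction2's internal implementation, and the arrays below are dummy data:

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """scores: (num_samples, num_classes) array; labels: (num_samples,) array of class ids."""
    top_k = np.argsort(scores, axis=1)[:, -k:]  # indices of the k highest scores per sample
    hits = [label in row for row, label in zip(top_k, labels)]
    return float(np.mean(hits))

# dummy scores for 4 samples and 2 classes
scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labels = np.array([0, 1, 0, 1])
print('top1:', top_k_accuracy(scores, labels, k=1))  # 1.0 for this toy example
```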
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[ ] 0/10, elapsed: 0s, ETA:
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==latest+torch1.5.0+cu101 -f https://download.openmmlab.com/mmcv/dist/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.5.1+cu101 True
0.8.0
10.1
GCC 7.3
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305306
###Markdown
Train a recognizer on customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usage can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for the training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
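When preparing your own data, the separate train and val annotation files can be produced by randomly splitting one master list. A minimal sketch (plain Python; `all_videos.txt` and the 80/20 ratio are assumptions for illustration):

```python
import random

random.seed(0)
# hypothetical master list, one "<filename> <label>" entry per line
with open('all_videos.txt') as f:
    lines = [l.strip() for l in f if l.strip()]

random.shuffle(lines)
split = int(0.8 * len(lines))  # 80% train / 20% val
train_lines, val_lines = lines[:split], lines[split:]

with open('my_train_video.txt', 'w') as f:
    f.write('\n'.join(train_lines) + '\n')
with open('my_val_video.txt', 'w') as f:
    f.write('\n'.join(val_lines) + '\n')
print(len(train_lines), 'train /', len(val_lines), 'val samples')
```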
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU, and by a further 16 because videos_per_gpu is also reduced by 16x below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 30
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 10
# We can set the log print interval to reduce the number of times the log is printed
cfg.log_config.interval = 5
# Set the seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01))
train_cfg = None
test_cfg = dict(average_clips=None)
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=4,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 30
checkpoint_config = dict(interval=10)
evaluation = dict(
interval=5, metrics=['top_k_accuracy', 'mean_class_accuracy'])
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './tutorial_exps'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
2020-11-20 09:38:54,909 - mmaction - INFO - These parameters in pretrained checkpoint are not loaded: {'fc.weight', 'fc.bias'}
2020-11-20 09:38:54,960 - mmaction - INFO - load checkpoint from ./checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
2020-11-20 09:38:55,052 - mmaction - WARNING - The model and loaded state dict do not match exactly
size mismatch for cls_head.fc_cls.weight: copying a param with shape torch.Size([400, 2048]) from checkpoint, the shape in current model is torch.Size([2, 2048]).
size mismatch for cls_head.fc_cls.bias: copying a param with shape torch.Size([400]) from checkpoint, the shape in current model is torch.Size([2]).
2020-11-20 09:38:55,056 - mmaction - INFO - Start running, host: root@74e02ee9f123, work_dir: /content/mmaction2/tutorial_exps
2020-11-20 09:38:55,061 - mmaction - INFO - workflow: [('train', 1)], max: 30 epochs
2020-11-20 09:38:59,709 - mmaction - INFO - Epoch [1][5/15] lr: 7.813e-05, eta: 0:06:52, time: 0.927, data_time: 0.708, memory: 2918, top1_acc: 0.7000, top5_acc: 1.0000, loss_cls: 0.6865, loss: 0.6865, grad_norm: 12.7663
2020-11-20 09:39:01,247 - mmaction - INFO - Epoch [1][10/15] lr: 7.813e-05, eta: 0:04:32, time: 0.309, data_time: 0.106, memory: 2918, top1_acc: 0.6000, top5_acc: 1.0000, loss_cls: 0.7171, loss: 0.7171, grad_norm: 13.7446
2020-11-20 09:39:02,112 - mmaction - INFO - Epoch [1][15/15] lr: 7.813e-05, eta: 0:03:24, time: 0.173, data_time: 0.001, memory: 2918, top1_acc: 0.2000, top5_acc: 1.0000, loss_cls: 0.8884, loss: 0.8884, grad_norm: 14.7140
2020-11-20 09:39:06,596 - mmaction - INFO - Epoch [2][5/15] lr: 7.813e-05, eta: 0:04:05, time: 0.876, data_time: 0.659, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6562, loss: 0.6562, grad_norm: 10.5716
2020-11-20 09:39:08,104 - mmaction - INFO - Epoch [2][10/15] lr: 7.813e-05, eta: 0:03:39, time: 0.300, data_time: 0.081, memory: 2918, top1_acc: 0.2000, top5_acc: 1.0000, loss_cls: 0.7480, loss: 0.7480, grad_norm: 11.7083
2020-11-20 09:39:09,075 - mmaction - INFO - Epoch [2][15/15] lr: 7.813e-05, eta: 0:03:14, time: 0.195, data_time: 0.008, memory: 2918, top1_acc: 0.6000, top5_acc: 1.0000, loss_cls: 0.6735, loss: 0.6735, grad_norm: 12.8046
2020-11-20 09:39:13,756 - mmaction - INFO - Epoch [3][5/15] lr: 7.813e-05, eta: 0:03:39, time: 0.914, data_time: 0.693, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7218, loss: 0.7218, grad_norm: 12.4893
2020-11-20 09:39:15,203 - mmaction - INFO - Epoch [3][10/15] lr: 7.813e-05, eta: 0:03:24, time: 0.291, data_time: 0.092, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6188, loss: 0.6188, grad_norm: 11.8106
2020-11-20 09:39:16,108 - mmaction - INFO - Epoch [3][15/15] lr: 7.813e-05, eta: 0:03:07, time: 0.181, data_time: 0.003, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7298, loss: 0.7298, grad_norm: 12.5043
2020-11-20 09:39:21,525 - mmaction - INFO - Epoch [4][5/15] lr: 7.813e-05, eta: 0:03:29, time: 1.062, data_time: 0.832, memory: 2918, top1_acc: 0.5000, top5_acc: 1.0000, loss_cls: 0.6833, loss: 0.6833, grad_norm: 10.1046
2020-11-20 09:39:22,815 - mmaction - INFO - Epoch [4][10/15] lr: 7.813e-05, eta: 0:03:17, time: 0.258, data_time: 0.059, memory: 2918, top1_acc: 0.7000, top5_acc: 1.0000, loss_cls: 0.6640, loss: 0.6640, grad_norm: 11.7589
2020-11-20 09:39:23,686 - mmaction - INFO - Epoch [4][15/15] lr: 7.813e-05, eta: 0:03:03, time: 0.174, data_time: 0.001, memory: 2918, top1_acc: 0.3000, top5_acc: 1.0000, loss_cls: 0.7372, loss: 0.7372, grad_norm: 13.6163
2020-11-20 09:39:28,818 - mmaction - INFO - Epoch [5][5/15] lr: 7.813e-05, eta: 0:03:17, time: 1.001, data_time: 0.767, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6309, loss: 0.6309, grad_norm: 11.1864
2020-11-20 09:39:29,915 - mmaction - INFO - Epoch [5][10/15] lr: 7.813e-05, eta: 0:03:06, time: 0.220, data_time: 0.005, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7178, loss: 0.7178, grad_norm: 12.4574
2020-11-20 09:39:31,063 - mmaction - INFO - Epoch [5][15/15] lr: 7.813e-05, eta: 0:02:57, time: 0.229, data_time: 0.052, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7094, loss: 0.7094, grad_norm: 12.4649
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN used for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation protocol. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
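If you want to plot the training curve, the numbers can be pulled straight out of the text log. A minimal regex sketch; it only assumes the `loss: <number>` pattern visible in the log above and that a `.log` file was written into the work_dir (both treated as assumptions here):

```python
import glob
import re

log_files = sorted(glob.glob('tutorial_exps/*.log'))  # assumed location of the text log
losses = []
if log_files:
    with open(log_files[-1]) as f:
        for line in f:
            m = re.search(r'\bloss: ([0-9.]+)', line)  # matches "loss: 0.6865" but not "loss_cls:"
            if m:
                losses.append(float(m.group(1)))

print(f'found {len(log_files)} log file(s), parsed {len(losses)} loss values')
```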
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 1.9 task/s, elapsed: 5s, ETA: 0s
Evaluating top_k_accuracy ...
top1_acc 1.0000
top5_acc 1.0000
Evaluating mean_class_accuracy ...
mean_acc 1.0000
top1_acc: 1.0000
top5_acc: 1.0000
mean_class_accuracy: 1.0000
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.8.0+cu101 True
0.16.0
10.1
GCC 7.3
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'tools/data/kinetics/label_map_k400.txt'
results = inference_recognizer(model, video)
labels = open(label).readlines()
labels = [x.strip() for x in labels]
results = [(labels[k[0]], k[1]) for k in results]
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305307
###Markdown
Train a recognizer on customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usage can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for the training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
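A common source of silent failures is an annotation file that points at videos which are not on disk, so it is worth verifying the paths once. A minimal sketch (plain Python, reusing the files downloaded above):

```python
import os

ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
data_prefix = 'kinetics400_tiny/train/'

missing = []
with open(ann_file) as f:
    for line in f:
        if not line.strip():
            continue
        filename = line.strip().rsplit(' ', 1)[0]
        if not os.path.exists(os.path.join(data_prefix, filename)):
            missing.append(filename)

print('missing files:', missing if missing else 'none')
```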
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU, and by a further 16 because videos_per_gpu is also reduced by 16x below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce the number of times the log is printed
cfg.log_config.interval = 5
# Set the seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Save the best
cfg.evaluation.save_best='auto'
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01),
train_cfg=None,
test_cfg=dict(average_clips=None))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 10
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
evaluation = dict(
interval=5,
metrics=['top_k_accuracy', 'mean_class_accuracy'],
save_best='auto')
work_dir = './tutorial_exps'
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Use load_from_torchvision loader
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN used for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation protocol. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
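Similarly, mean class accuracy is simple to recompute outside the framework if you ever want to double-check a number. A standalone NumPy sketch (not the library's implementation; dummy data):

```python
import numpy as np

def mean_class_accuracy(preds, labels):
    """preds, labels: (num_samples,) arrays of integer class ids."""
    per_class = []
    for c in np.unique(labels):
        mask = labels == c
        per_class.append(np.mean(preds[mask] == labels[mask]))  # recall of class c
    return float(np.mean(per_class))

preds = np.array([0, 1, 1, 1, 0, 0])
labels = np.array([0, 1, 0, 1, 0, 1])
print('mean class accuracy:', mean_class_accuracy(preds, labels))  # 2/3 for this toy example
```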
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[ ] 0/10, elapsed: 0s, ETA:
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
###Output
1.5.1+cu101 True
0.1.0+78b68d5
###Markdown
Perform inference with an MMAction2 recognizerMMAction2 already provides high-level APIs for inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.61644
rock scissors paper: 10.754843
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305306
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)First, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
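# The annotation file we just printed can also be parsed with a few lines of
# plain Python (no MMAction2 APIs involved) -- a minimal sketch:
from collections import Counter
with open('kinetics400_tiny/kinetics_tiny_train_video.txt') as f:
    samples = [line.split() for line in f if line.strip()]
samples = [(name, int(label)) for name, label in samples]
print(len(samples), 'training samples, label distribution:',
      Counter(label for _, label in samples))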
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training with the original batch size.
# We divide it by 8 since we only use one GPU, and by a further 16 since
# videos_per_gpu is reduced by a factor of 16 below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 30
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 10
# We can set the log print interval to reduce the times the log is printed
cfg.log_config.interval = 5
# Set seed so the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
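# A quick sanity check of the values we just modified. This is purely a
# convenience print-out and is not required for training.
print('videos_per_gpu:', cfg.data.videos_per_gpu)
print('learning rate :', cfg.optimizer.lr)
print('num_classes   :', cfg.model.cls_head.num_classes)
print('total_epochs  :', cfg.total_epochs)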
###Output
_____no_output_____
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/checkpoints/resnet50-19c8e357.pth
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and see how well the recognizer is trained.First, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the full Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN classifier has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation. The results show that the recognizer achieves 100% top-1 accuracy and 100% top-5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 0.5 task/s, elapsed: 18s, ETA: 0s
Evaluating top_k_accuracy...
top1_acc 1.0000
top5_acc 1.0000
Evaluating mean_class_accuracy...
mean_acc 1.0000
top1_acc: 1.0000
top5_acc: 1.0000
mean_class_accuracy: 1.0000
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official Colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with an MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
###Output
1.5.1+cu101 True
0.1.0+78b68d5
###Markdown
Perform inference with an MMAction2 recognizerMMAction2 already provides high-level APIs for inference and training.
###Code
!mkdir checkpoints
!wget -c https://openmmlab.oss-accelerate.aliyuncs.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
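# Optionally save the top predictions to a text file for later inspection.
# A minimal sketch; the output filename is just an example.
with open('demo_predictions.txt', 'w') as f:
    for name, score in results:
        f.write(f'{name}\t{score:.4f}\n')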
###Output
arm wrestling: 29.61644
rock scissors paper: 10.754843
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305306
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)First, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://openmmlab.oss-accelerate.aliyuncs.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training with the original batch size.
# We divide it by 8 since we only use one GPU, and by a further 16 since
# videos_per_gpu is reduced by a factor of 16 below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 30
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 10
# We can set the log print interval to reduce the times the log is printed
cfg.log_config.interval = 5
# Set seed so the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
###Output
_____no_output_____
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
/home/run/anaconda3/envs/mmaction/lib/python3.7/site-packages/setuptools/distutils_patch.py:26: UserWarning: Distutils was imported before Setuptools. This usage is discouraged and may exhibit undesirable behaviors or errors. Please use Setuptools' objects directly or at least import Setuptools first.
"Distutils was imported before Setuptools. This usage is discouraged "
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and see how well the recognizer is trained.First, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the full Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN classifier has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation. The results show that the recognizer achieves 100% top-1 accuracy and 100% top-5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
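# A minimal sketch for turning the raw scores in `outputs` into per-video
# predictions. It assumes `outputs` holds one score array per video and that
# the dataset exposes `video_infos` (true for `VideoDataset` here, but worth
# double-checking for other dataset types).
import numpy as np
for info, scores in zip(dataset.video_infos, outputs):
    pred = int(np.argmax(scores))
    print(info['filename'], '-> predicted', pred, '| ground truth', info['label'])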
###Output
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 0.5 task/s, elapsed: 18s, ETA: 0s
Evaluating top_k_accuracy...
top1_acc 1.0000
top5_acc 1.0000
Evaluating mean_class_accuracy...
mean_acc 1.0000
top1_acc: 1.0000
top5_acc: 1.0000
mean_class_accuracy: 1.0000
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official Colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with an MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==latest+torch1.5.0+cu101 -f https://download.openmmlab.com/mmcv/dist/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.5.1+cu101 True
0.8.0
10.1
GCC 7.3
###Markdown
Perform inference with an MMAction2 recognizerMMAction2 already provides high-level APIs for inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map_k400.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305306
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)First, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training with the original batch size.
# We divide it by 8 since we only use one GPU, and by a further 16 since
# videos_per_gpu is reduced by a factor of 16 below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 30
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 10
# We can set the log print interval to reduce the times the log is printed
cfg.log_config.interval = 5
# Set seed so the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Have a look at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
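# Optionally dump the final config to disk so the exact training setup can be
# reproduced later. `Config.dump` exists in recent mmcv versions; if yours
# lacks it, writing `cfg.pretty_text` to a file works just as well.
import os.path as osp
import mmcv
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
cfg.dump(osp.join(cfg.work_dir, 'tutorial_finetune_config.py'))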
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01))
train_cfg = None
test_cfg = dict(average_clips=None)
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=4,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 30
checkpoint_config = dict(interval=10)
evaluation = dict(
interval=5, metrics=['top_k_accuracy', 'mean_class_accuracy'])
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './tutorial_exps'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
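# Just out of curiosity: report the model size. This is plain PyTorch and has
# no effect on training.
n_params = sum(p.numel() for p in model.parameters())
print(f'The recognizer has {n_params / 1e6:.1f}M parameters.')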
###Output
2020-11-20 09:38:54,909 - mmaction - INFO - These parameters in pretrained checkpoint are not loaded: {'fc.weight', 'fc.bias'}
2020-11-20 09:38:54,960 - mmaction - INFO - load checkpoint from ./checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
2020-11-20 09:38:55,052 - mmaction - WARNING - The model and loaded state dict do not match exactly
size mismatch for cls_head.fc_cls.weight: copying a param with shape torch.Size([400, 2048]) from checkpoint, the shape in current model is torch.Size([2, 2048]).
size mismatch for cls_head.fc_cls.bias: copying a param with shape torch.Size([400]) from checkpoint, the shape in current model is torch.Size([2]).
2020-11-20 09:38:55,056 - mmaction - INFO - Start running, host: root@74e02ee9f123, work_dir: /content/mmaction2/tutorial_exps
2020-11-20 09:38:55,061 - mmaction - INFO - workflow: [('train', 1)], max: 30 epochs
2020-11-20 09:38:59,709 - mmaction - INFO - Epoch [1][5/15] lr: 7.813e-05, eta: 0:06:52, time: 0.927, data_time: 0.708, memory: 2918, top1_acc: 0.7000, top5_acc: 1.0000, loss_cls: 0.6865, loss: 0.6865, grad_norm: 12.7663
2020-11-20 09:39:01,247 - mmaction - INFO - Epoch [1][10/15] lr: 7.813e-05, eta: 0:04:32, time: 0.309, data_time: 0.106, memory: 2918, top1_acc: 0.6000, top5_acc: 1.0000, loss_cls: 0.7171, loss: 0.7171, grad_norm: 13.7446
2020-11-20 09:39:02,112 - mmaction - INFO - Epoch [1][15/15] lr: 7.813e-05, eta: 0:03:24, time: 0.173, data_time: 0.001, memory: 2918, top1_acc: 0.2000, top5_acc: 1.0000, loss_cls: 0.8884, loss: 0.8884, grad_norm: 14.7140
2020-11-20 09:39:06,596 - mmaction - INFO - Epoch [2][5/15] lr: 7.813e-05, eta: 0:04:05, time: 0.876, data_time: 0.659, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6562, loss: 0.6562, grad_norm: 10.5716
2020-11-20 09:39:08,104 - mmaction - INFO - Epoch [2][10/15] lr: 7.813e-05, eta: 0:03:39, time: 0.300, data_time: 0.081, memory: 2918, top1_acc: 0.2000, top5_acc: 1.0000, loss_cls: 0.7480, loss: 0.7480, grad_norm: 11.7083
2020-11-20 09:39:09,075 - mmaction - INFO - Epoch [2][15/15] lr: 7.813e-05, eta: 0:03:14, time: 0.195, data_time: 0.008, memory: 2918, top1_acc: 0.6000, top5_acc: 1.0000, loss_cls: 0.6735, loss: 0.6735, grad_norm: 12.8046
2020-11-20 09:39:13,756 - mmaction - INFO - Epoch [3][5/15] lr: 7.813e-05, eta: 0:03:39, time: 0.914, data_time: 0.693, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7218, loss: 0.7218, grad_norm: 12.4893
2020-11-20 09:39:15,203 - mmaction - INFO - Epoch [3][10/15] lr: 7.813e-05, eta: 0:03:24, time: 0.291, data_time: 0.092, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6188, loss: 0.6188, grad_norm: 11.8106
2020-11-20 09:39:16,108 - mmaction - INFO - Epoch [3][15/15] lr: 7.813e-05, eta: 0:03:07, time: 0.181, data_time: 0.003, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7298, loss: 0.7298, grad_norm: 12.5043
2020-11-20 09:39:21,525 - mmaction - INFO - Epoch [4][5/15] lr: 7.813e-05, eta: 0:03:29, time: 1.062, data_time: 0.832, memory: 2918, top1_acc: 0.5000, top5_acc: 1.0000, loss_cls: 0.6833, loss: 0.6833, grad_norm: 10.1046
2020-11-20 09:39:22,815 - mmaction - INFO - Epoch [4][10/15] lr: 7.813e-05, eta: 0:03:17, time: 0.258, data_time: 0.059, memory: 2918, top1_acc: 0.7000, top5_acc: 1.0000, loss_cls: 0.6640, loss: 0.6640, grad_norm: 11.7589
2020-11-20 09:39:23,686 - mmaction - INFO - Epoch [4][15/15] lr: 7.813e-05, eta: 0:03:03, time: 0.174, data_time: 0.001, memory: 2918, top1_acc: 0.3000, top5_acc: 1.0000, loss_cls: 0.7372, loss: 0.7372, grad_norm: 13.6163
2020-11-20 09:39:28,818 - mmaction - INFO - Epoch [5][5/15] lr: 7.813e-05, eta: 0:03:17, time: 1.001, data_time: 0.767, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6309, loss: 0.6309, grad_norm: 11.1864
2020-11-20 09:39:29,915 - mmaction - INFO - Epoch [5][10/15] lr: 7.813e-05, eta: 0:03:06, time: 0.220, data_time: 0.005, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7178, loss: 0.7178, grad_norm: 12.4574
2020-11-20 09:39:31,063 - mmaction - INFO - Epoch [5][15/15] lr: 7.813e-05, eta: 0:02:57, time: 0.229, data_time: 0.052, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7094, loss: 0.7094, grad_norm: 12.4649
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and see how well the recognizer is trained.First, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the full Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN classifier has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation. The results show that the recognizer achieves 100% top-1 accuracy and 100% top-5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
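As a side note, if you later want to run inference with the finetuned weights instead of the original Kinetics-400 checkpoint, a sketch like the one below should work. The checkpoint name `epoch_30.pth` is an assumption based on the default checkpoint naming and the 30-epoch schedule used above; adjust it to whatever file actually appears in `./tutorial_exps`.

    from mmaction.apis import init_recognizer
    # Re-create the recognizer from the modified config and load the finetuned weights.
    finetuned_model = init_recognizer(cfg, './tutorial_exps/epoch_30.pth', device='cuda:0')
    # `finetuned_model` can now be passed to `inference_recognizer` exactly as before.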
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 1.9 task/s, elapsed: 5s, ETA: 0s
Evaluating top_k_accuracy ...
top1_acc 1.0000
top5_acc 1.0000
Evaluating mean_class_accuracy ...
mean_acc 1.0000
top1_acc: 1.0000
top5_acc: 1.0000
mean_class_accuracy: 1.0000
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official Colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with an MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==latest+torch1.5.0+cu101 -f https://download.openmmlab.com/mmcv/dist/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.5.1+cu101 True
0.8.0
10.1
GCC 7.3
###Markdown
Perform inference with an MMAction2 recognizerMMAction2 already provides high-level APIs for inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map_k400.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
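# The numbers printed above are raw classification scores, not probabilities.
# As a rough illustration, the sketch below applies a softmax over just the
# returned top results to get their *relative* confidence -- note this is not
# a proper probability over all 400 classes.
import numpy as np
names = [name for name, _ in results]
scores = np.array([score for _, score in results], dtype=np.float64)
rel = np.exp(scores - scores.max())
rel /= rel.sum()
for name, p in zip(names, rel):
    print(f'{name}: {p:.3f} (relative)')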
###Output
arm wrestling: 29.616438
rock scissors paper: 10.754841
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305306
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)First, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training with the original batch size.
# We divide it by 8 since we only use one GPU, and by a further 16 since
# videos_per_gpu is reduced by a factor of 16 below.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 30
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 10
# We can set the log print interval to reduce the times the log is printed
cfg.log_config.interval = 5
# Set seed so the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Have a look at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01))
train_cfg = None
test_cfg = dict(average_clips=None)
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=4,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(type='Flip', flip_ratio=0),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 30
checkpoint_config = dict(interval=10)
evaluation = dict(
interval=5, metrics=['top_k_accuracy', 'mean_class_accuracy'])
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './tutorial_exps'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
2020-11-20 09:38:54,909 - mmaction - INFO - These parameters in pretrained checkpoint are not loaded: {'fc.weight', 'fc.bias'}
2020-11-20 09:38:54,960 - mmaction - INFO - load checkpoint from ./checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
2020-11-20 09:38:55,052 - mmaction - WARNING - The model and loaded state dict do not match exactly
size mismatch for cls_head.fc_cls.weight: copying a param with shape torch.Size([400, 2048]) from checkpoint, the shape in current model is torch.Size([2, 2048]).
size mismatch for cls_head.fc_cls.bias: copying a param with shape torch.Size([400]) from checkpoint, the shape in current model is torch.Size([2]).
2020-11-20 09:38:55,056 - mmaction - INFO - Start running, host: root@74e02ee9f123, work_dir: /content/mmaction2/tutorial_exps
2020-11-20 09:38:55,061 - mmaction - INFO - workflow: [('train', 1)], max: 30 epochs
2020-11-20 09:38:59,709 - mmaction - INFO - Epoch [1][5/15] lr: 7.813e-05, eta: 0:06:52, time: 0.927, data_time: 0.708, memory: 2918, top1_acc: 0.7000, top5_acc: 1.0000, loss_cls: 0.6865, loss: 0.6865, grad_norm: 12.7663
2020-11-20 09:39:01,247 - mmaction - INFO - Epoch [1][10/15] lr: 7.813e-05, eta: 0:04:32, time: 0.309, data_time: 0.106, memory: 2918, top1_acc: 0.6000, top5_acc: 1.0000, loss_cls: 0.7171, loss: 0.7171, grad_norm: 13.7446
2020-11-20 09:39:02,112 - mmaction - INFO - Epoch [1][15/15] lr: 7.813e-05, eta: 0:03:24, time: 0.173, data_time: 0.001, memory: 2918, top1_acc: 0.2000, top5_acc: 1.0000, loss_cls: 0.8884, loss: 0.8884, grad_norm: 14.7140
2020-11-20 09:39:06,596 - mmaction - INFO - Epoch [2][5/15] lr: 7.813e-05, eta: 0:04:05, time: 0.876, data_time: 0.659, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6562, loss: 0.6562, grad_norm: 10.5716
2020-11-20 09:39:08,104 - mmaction - INFO - Epoch [2][10/15] lr: 7.813e-05, eta: 0:03:39, time: 0.300, data_time: 0.081, memory: 2918, top1_acc: 0.2000, top5_acc: 1.0000, loss_cls: 0.7480, loss: 0.7480, grad_norm: 11.7083
2020-11-20 09:39:09,075 - mmaction - INFO - Epoch [2][15/15] lr: 7.813e-05, eta: 0:03:14, time: 0.195, data_time: 0.008, memory: 2918, top1_acc: 0.6000, top5_acc: 1.0000, loss_cls: 0.6735, loss: 0.6735, grad_norm: 12.8046
2020-11-20 09:39:13,756 - mmaction - INFO - Epoch [3][5/15] lr: 7.813e-05, eta: 0:03:39, time: 0.914, data_time: 0.693, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7218, loss: 0.7218, grad_norm: 12.4893
2020-11-20 09:39:15,203 - mmaction - INFO - Epoch [3][10/15] lr: 7.813e-05, eta: 0:03:24, time: 0.291, data_time: 0.092, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6188, loss: 0.6188, grad_norm: 11.8106
2020-11-20 09:39:16,108 - mmaction - INFO - Epoch [3][15/15] lr: 7.813e-05, eta: 0:03:07, time: 0.181, data_time: 0.003, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7298, loss: 0.7298, grad_norm: 12.5043
2020-11-20 09:39:21,525 - mmaction - INFO - Epoch [4][5/15] lr: 7.813e-05, eta: 0:03:29, time: 1.062, data_time: 0.832, memory: 2918, top1_acc: 0.5000, top5_acc: 1.0000, loss_cls: 0.6833, loss: 0.6833, grad_norm: 10.1046
2020-11-20 09:39:22,815 - mmaction - INFO - Epoch [4][10/15] lr: 7.813e-05, eta: 0:03:17, time: 0.258, data_time: 0.059, memory: 2918, top1_acc: 0.7000, top5_acc: 1.0000, loss_cls: 0.6640, loss: 0.6640, grad_norm: 11.7589
2020-11-20 09:39:23,686 - mmaction - INFO - Epoch [4][15/15] lr: 7.813e-05, eta: 0:03:03, time: 0.174, data_time: 0.001, memory: 2918, top1_acc: 0.3000, top5_acc: 1.0000, loss_cls: 0.7372, loss: 0.7372, grad_norm: 13.6163
2020-11-20 09:39:28,818 - mmaction - INFO - Epoch [5][5/15] lr: 7.813e-05, eta: 0:03:17, time: 1.001, data_time: 0.767, memory: 2918, top1_acc: 0.8000, top5_acc: 1.0000, loss_cls: 0.6309, loss: 0.6309, grad_norm: 11.1864
2020-11-20 09:39:29,915 - mmaction - INFO - Epoch [5][10/15] lr: 7.813e-05, eta: 0:03:06, time: 0.220, data_time: 0.005, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7178, loss: 0.7178, grad_norm: 12.4574
2020-11-20 09:39:31,063 - mmaction - INFO - Epoch [5][15/15] lr: 7.813e-05, eta: 0:02:57, time: 0.229, data_time: 0.052, memory: 2918, top1_acc: 0.4000, top5_acc: 1.0000, loss_cls: 0.7094, loss: 0.7094, grad_norm: 12.4649
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and see how well the recognizer is trained.First, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a pre-trained TSN model and finetune it for action recognition.The original TSN is trained on the full Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN classifier has a different weight shape and is not used.Third, after training, the recognizer is evaluated with the default evaluation. The results show that the recognizer achieves 100% top-1 accuracy and 100% top-5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 1.9 task/s, elapsed: 5s, ETA: 0s
Evaluating top_k_accuracy ...
top1_acc 1.0000
top5_acc 1.0000
Evaluating mean_class_accuracy ...
mean_acc 1.0000
top1_acc: 1.0000
top5_acc: 1.0000
mean_class_accuracy: 1.0000
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official Colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with an MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.8.0+cu101 torchvision==0.9.0+cu101 torchtext==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.8.0/index.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
# Install some optional requirements
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
# Check MMCV installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(get_compiling_cuda_version())
print(get_compiler_version())
###Output
1.9.0+cu111 True
0.17.0
11.1
GCC 7.3
###Markdown
Perform inference with an MMAction2 recognizerMMAction2 already provides high-level APIs for inference and training.
###Code
!mkdir checkpoints
!wget -c https://download.openmmlab.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = '/home/pepeotalk/mmaction2/configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = '/home/pepeotalk/mmaction2/demo/demo.mp4'
label = '/home/pepeotalk/mmaction2/demo/label_map_k400.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.617208
rock scissors paper: 10.753862
shaking hands: 9.907873
clapping: 9.189154
massaging feet: 8.303853
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://download.openmmlab.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
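# ---------------------------------------------------------------------------
# Optional sketch (added, not part of the original tutorial): the annotation
# format above is simply "<video filename> <label>" per line, so a custom
# dataset can be converted with a few lines of Python. The folder layout
# ('my_videos/<class_name>/*.mp4') and the label mapping below are
# hypothetical examples.
# ---------------------------------------------------------------------------
import os
label_map = {'class_a': 0, 'class_b': 1}  # hypothetical class -> label mapping
if os.path.isdir('my_videos'):
    with open('my_custom_train_video.txt', 'w') as f:
        for class_name, label in label_map.items():
            for video_name in os.listdir(os.path.join('my_videos', class_name)):
                f.write(f'{video_name} {label}\n')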
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# The flag is used to determine whether it is omnisource training
cfg.setdefault('omnisource', False)
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training with a larger batch size.
# We divide it by 8 (single GPU here) and by a further 16 to match the reduced videos_per_gpu.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 10
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 5
# We can set the log print interval to reduce how often the log is printed
cfg.log_config.interval = 5
# Set seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Save the best
cfg.evaluation.save_best='auto'
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
###Output
Config:
model = dict(
type='Recognizer2D',
backbone=dict(
type='ResNet',
pretrained='torchvision://resnet50',
depth=50,
norm_eval=False),
cls_head=dict(
type='TSNHead',
num_classes=2,
in_channels=2048,
spatial_type='avg',
consensus=dict(type='AvgConsensus', dim=1),
dropout_ratio=0.4,
init_std=0.01),
train_cfg=None,
test_cfg=dict(average_clips=None))
optimizer = dict(type='SGD', lr=7.8125e-05, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=40, norm_type=2))
lr_config = dict(policy='step', step=[40, 80])
total_epochs = 10
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
resume_from = None
workflow = [('train', 1)]
dataset_type = 'VideoDataset'
data_root = 'kinetics400_tiny/train/'
data_root_val = 'kinetics400_tiny/val/'
ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
train_pipeline = [
dict(type='DecordInit'),
dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]
val_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
test_pipeline = [
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]
data = dict(
videos_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_train_video.txt',
data_prefix='kinetics400_tiny/train/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames', clip_len=1, frame_interval=1,
num_clips=8),
dict(type='DecordDecode'),
dict(
type='MultiScaleCrop',
input_size=224,
scales=(1, 0.875, 0.75, 0.66),
random_crop=False,
max_wh_scale_gap=1),
dict(type='Resize', scale=(224, 224), keep_ratio=False),
dict(type='Flip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs', 'label'])
]),
val=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=8,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]),
test=dict(
type='VideoDataset',
ann_file='kinetics400_tiny/kinetics_tiny_val_video.txt',
data_prefix='kinetics400_tiny/val/',
pipeline=[
dict(type='DecordInit'),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='DecordDecode'),
dict(type='Resize', scale=(-1, 256)),
dict(type='ThreeCrop', crop_size=256),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_bgr=False),
dict(type='FormatShape', input_format='NCHW'),
dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
dict(type='ToTensor', keys=['imgs'])
]))
evaluation = dict(
interval=5,
metrics=['top_k_accuracy', 'mean_class_accuracy'],
save_best='auto')
work_dir = './tutorial_exps'
omnisource = False
seed = 0
gpu_ids = range(0, 1)
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Use load_from_torchvision loader
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN used for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated by the default evaluation. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
_____no_output_____
###Markdown
MMAction2 TutorialWelcome to MMAction2! This is the official colab tutorial for using MMAction2. In this tutorial, you will learn- Perform inference with a MMAction2 recognizer.- Train a new recognizer with a new dataset.Let's start! Install MMAction2
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmaction2
!rm -rf mmaction2
!git clone https://github.com/open-mmlab/mmaction2.git
%cd mmaction2
!pip install -e .
!pip install -r requirements/optional.txt
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMAction2 installation
import mmaction
print(mmaction.__version__)
###Output
1.5.1+cu101 True
0.1.0+78b68d5
###Markdown
Perform inference with a MMAction2 recognizerMMAction2 already provides high level APIs to do inference and training.
###Code
!mkdir checkpoints
!wget -c https://openmmlab.oss-accelerate.aliyuncs.com/mmaction/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth \
-O checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth
from mmaction.apis import inference_recognizer, init_recognizer
# Choose to use a config and initialize the recognizer
config = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
# Setup a checkpoint file to load
checkpoint = 'checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Initialize the recognizer
model = init_recognizer(config, checkpoint, device='cuda:0')
# Use the recognizer to do inference
video = 'demo/demo.mp4'
label = 'demo/label_map.txt'
results = inference_recognizer(model, video, label)
# Let's show the results
for result in results:
print(f'{result[0]}: ', result[1])
###Output
arm wrestling: 29.61644
rock scissors paper: 10.754843
shaking hands: 9.908401
clapping: 9.189913
massaging feet: 8.305306
###Markdown
Train a recognizer on a customized datasetTo train a new recognizer, there are usually three things to do:1. Support a new dataset2. Modify the config3. Train a new recognizer Support a new datasetIn this tutorial, we give an example of converting the data into the format of existing datasets. Other methods and more advanced usages can be found in the [doc](/docs/tutorials/new_dataset.md)Firstly, let's download a tiny dataset obtained from [Kinetics-400](https://deepmind.com/research/open-source/open-source-datasets/kinetics/). We select 30 videos with their labels as the train dataset and 10 videos with their labels as the test dataset.
###Code
# download, decompress the data
!rm kinetics400_tiny.zip*
!rm -rf kinetics400_tiny
!wget https://openmmlab.oss-accelerate.aliyuncs.com/mmaction/kinetics400_tiny.zip
!unzip kinetics400_tiny.zip > /dev/null
# Check the directory structure of the tiny data
# Install tree first
!apt-get -q install tree
!tree kinetics400_tiny
# After downloading the data, we need to check the annotation format
!cat kinetics400_tiny/kinetics_tiny_train_video.txt
###Output
D32_1gwq35E.mp4 0
iRuyZSKhHRg.mp4 1
oXy-e_P_cAI.mp4 0
34XczvTaRiI.mp4 1
h2YqqUhnR34.mp4 0
O46YA8tI530.mp4 0
kFC3KY2bOP8.mp4 1
WWP5HZJsg-o.mp4 1
phDqGd0NKoo.mp4 1
yLC9CtWU5ws.mp4 0
27_CSXByd3s.mp4 1
IyfILH9lBRo.mp4 1
T_TMNGzVrDk.mp4 1
TkkZPZHbAKA.mp4 0
PnOe3GZRVX8.mp4 1
soEcZZsBmDs.mp4 1
FMlSTTpN3VY.mp4 1
WaS0qwP46Us.mp4 0
A-wiliK50Zw.mp4 1
oMrZaozOvdQ.mp4 1
ZQV4U2KQ370.mp4 0
DbX8mPslRXg.mp4 1
h10B9SVE-nk.mp4 1
P5M-hAts7MQ.mp4 0
R8HXQkdgKWA.mp4 0
D92m0HsHjcQ.mp4 0
RqnKtCEoEcA.mp4 0
LvcFDgCAXQs.mp4 0
xGY2dP0YUjA.mp4 0
Wh_YPQdH1Zg.mp4 0
###Markdown
According to the format defined in [`VideoDataset`](./datasets/video_dataset.py), each line indicates a sample video with its filepath and label, separated by a whitespace. Modify the configIn the next step, we need to modify the config for training.To accelerate the process, we finetune a recognizer using a pre-trained recognizer.
###Code
from mmcv import Config
cfg = Config.fromfile('./configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py')
###Output
_____no_output_____
###Markdown
Given a config that trains a TSN model on kinetics400-full dataset, we need to modify some values to use it for training TSN on Kinetics400-tiny dataset.
###Code
from mmcv.runner import set_random_seed
# Modify dataset type and path
cfg.dataset_type = 'VideoDataset'
cfg.data_root = 'kinetics400_tiny/train/'
cfg.data_root_val = 'kinetics400_tiny/val/'
cfg.ann_file_train = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.ann_file_val = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.ann_file_test = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.type = 'VideoDataset'
cfg.data.test.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.test.data_prefix = 'kinetics400_tiny/val/'
cfg.data.train.type = 'VideoDataset'
cfg.data.train.ann_file = 'kinetics400_tiny/kinetics_tiny_train_video.txt'
cfg.data.train.data_prefix = 'kinetics400_tiny/train/'
cfg.data.val.type = 'VideoDataset'
cfg.data.val.ann_file = 'kinetics400_tiny/kinetics_tiny_val_video.txt'
cfg.data.val.data_prefix = 'kinetics400_tiny/val/'
# Modify num classes of the model in cls_head
cfg.model.cls_head.num_classes = 2
# We can use the pre-trained TSN model
cfg.load_from = './checkpoints/tsn_r50_1x1x3_100e_kinetics400_rgb_20200614-e508be42.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training with a larger batch size.
# We divide it by 8 (single GPU here) and by a further 16 to match the reduced videos_per_gpu.
cfg.data.videos_per_gpu = cfg.data.videos_per_gpu // 16
cfg.optimizer.lr = cfg.optimizer.lr / 8 / 16
cfg.total_epochs = 30
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 10
# We can set the log print interval to reduce how often the log is printed
cfg.log_config.interval = 5
# Set seed so that the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
###Output
_____no_output_____
###Markdown
Train a new recognizerFinally, let's initialize the dataset and recognizer, then train a new recognizer!
###Code
import os.path as osp
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.apis import train_model
import mmcv
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the recognizer
model = build_model(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_model(model, datasets, cfg, distributed=False, validate=True)
###Output
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/checkpoints/resnet50-19c8e357.pth
###Markdown
Understand the logFrom the log, we can get a basic understanding of the training process and know how well the recognizer is trained.Firstly, the ResNet-50 backbone pre-trained on ImageNet is loaded; this is a common practice since training from scratch is more costly. The log shows that all the weights of the ResNet-50 backbone are loaded except the `fc.bias` and `fc.weight`.Second, since the dataset we are using is small, we load a TSN model and finetune it for action recognition.The original TSN is trained on the original Kinetics-400 dataset, which contains 400 classes, but the Kinetics-400 Tiny dataset only has 2 classes. Therefore, the last FC layer of the pre-trained TSN used for classification has a different weight shape and is not used.Third, after training, the recognizer is evaluated by the default evaluation. The results show that the recognizer achieves 100% top1 accuracy and 100% top5 accuracy on the val dataset. Not bad! Test the trained recognizerAfter finetuning the recognizer, let's check the prediction results!
###Code
from mmaction.apis import single_gpu_test
from mmaction.datasets import build_dataloader
from mmcv.parallel import MMDataParallel
# Build a test dataloader
dataset = build_dataset(cfg.data.test, dict(test_mode=True))
data_loader = build_dataloader(
dataset,
videos_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader)
eval_config = cfg.evaluation
eval_config.pop('interval')
eval_res = dataset.evaluate(outputs, **eval_config)
for name, val in eval_res.items():
print(f'{name}: {val:.04f}')
###Output
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 0.5 task/s, elapsed: 18s, ETA: 0s
Evaluating top_k_accuracy...
top1_acc 1.0000
top5_acc 1.0000
Evaluating mean_class_accuracy...
mean_acc 1.0000
top1_acc: 1.0000
top5_acc: 1.0000
mean_class_accuracy: 1.0000
|
DirectProgramming/DPC++/Jupyter/oneapi-essentials-training/00_Introduction_to_Jupyter/Introduction_to_Jupyter.ipynb | ###Markdown
Introduction to JupyterLab and Notebooks If you are familiar with Jupyter, skip below and head to the first exercise. A __JupyterLab__ notebook is a sequence of boxes referred to as "cells". Each cell will contain text, like this one, or C++ or Python code that may be executed as part of this tutorial. As you proceed, please note the following: * The active cell is indicated by the blue bar on the left. Click on a cell to select it. * Use the __"run"__ ▶ button at the top or __Shift+Enter__ to execute a selected cell, starting with this one. * Note: If you mistakenly press just Enter, you will enter the editing mode for the cell. To exit editing mode and continue, press Shift+Enter.* Unless stated otherwise, the cells containing code within this tutorial MUST be executed in sequence. * You may save the tutorial at any time, which will save the output, but not the state. Saved Jupyter Notebooks will save sequence numbers which may make a cell appear to have been executed when it has not been executed for the new session. Because state is not saved, re-opening or __restarting a Jupyter Notebook__ will require re-executing all the executable steps, starting in order from the beginning. * If for any reason you need to restart the tutorial from the beginning, you may reset the state of the Jupyter Notebook and clear all output. Use the menu at the top to select __Kernel -> "Restart Kernel and Clear All Outputs"__ * Cells containing Markdown can be executed and will render. However, there is no indication of execution, and it is not necessary to explicitly execute Markdown cells. * Cells containing executable code will have a "[ ]:" to the left of the cell: * __[ ]__ blank indicates that the cell has not yet been executed. * __[\*]__ indicates that the cell is currently executing. * Once a cell is done executing, a number will appear in the small brackets with each cell execution to indicate where in the sequence the cell has been executed. Any output (e.g. print()'s) from the code will appear below the cell. Code editing, Compiling and Running in Jupyter NotebooksThis code shows a simple C++ Hello World. Inspect the code; no modifications are necessary:1. Inspect the code cell below and click run ▶ to save the code to file2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
###Code
%%writefile src/hello.cpp
#include <iostream>
#define RESET "\033[0m"
#define RED "\033[31m" /* Red */
#define BLUE "\033[34m" /* Blue */
int main(){
std::cout << RED << "Hello World" << RESET << "\n";
}
###Output
_____no_output_____
###Markdown
Build and RunSelect the cell below and click run ▶ to compile and execute the code above:
###Code
! chmod 755 q; chmod 755 run_hello.sh;if [ -x "$(command -v qsub)" ]; then ./q run_hello.sh; else run_hello.sh; fi
###Output
_____no_output_____
###Markdown
Get started on Module 1[Click Here](../01_oneAPI_Intro/oneAPI_Intro.ipynb) Refresh Your Jupyter NotebooksIf it's been awhile since you started exploring the notebooks, you will likely need to update them for compatibility with Intel® DevCloud for oneAPI and the Intel® oneAPI DPC++ Compiler to keep up with the latest updates.Run the cell below to get latest and replace with latest version of oneAPI Essentials Modules:
###Code
!/data/oneapi_workshop/get_jupyter_notebooks.sh
###Output
_____no_output_____
###Markdown
Introduction to JupyterLab and Notebooks If you are familiar with Jupyter, skip below and head to the first exercise. A __JupyterLab__ notebook is a sequence of boxes referred to as "cells". Each cell will contain text, like this one, or C++ or Python code that may be executed as part of this tutorial. As you proceed, please note the following: * The active cell is indicated by the blue bar on the left. Click on a cell to select it. * Use the __"run"__ ▶ button at the top or __Shift+Enter__ to execute a selected cell, starting with this one. * Note: If you mistakenly press just Enter, you will enter the editing mode for the cell. To exit editing mode and continue, press Shift+Enter.* Unless stated otherwise, the cells containing code within this tutorial MUST be executed in sequence. * You may save the tutorial at any time, which will save the output, but not the state. Saved Jupyter Notebooks will save sequence numbers which may make a cell appear to have been executed when it has not been executed for the new session. Because state is not saved, re-opening or __restarting a Jupyter Notebook__ will require re-executing all the executable steps, starting in order from the beginning. * If for any reason you need to restart the tutorial from the beginning, you may reset the state of the Jupyter Notebook and clear all output. Use the menu at the top to select __Kernel -> "Restart Kernel and Clear All Outputs"__ * Cells containing Markdown can be executed and will render. However, there is no indication of execution, and it is not necessary to explicitly execute Markdown cells. * Cells containing executable code will have a "[ ]:" to the left of the cell: * __[ ]__ blank indicates that the cell has not yet been executed. * __[\*]__ indicates that the cell is currently executing. * Once a cell is done executing, a number will appear in the small brackets with each cell execution to indicate where in the sequence the cell has been executed. Any output (e.g. print()'s) from the code will appear below the cell. Code editing, Compiling and Running in Jupyter NotebooksThis code shows a simple C++ Hello World. Inspect the code; no modifications are necessary:1. Inspect the code cell below and click run ▶ to save the code to file2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
###Code
%%writefile src/hello.cpp
#include <iostream>
#define RESET "\033[0m"
#define RED "\033[31m" /* Red */
#define BLUE "\033[34m" /* Blue */
int main(){
std::cout << RED << "Hello World" << RESET << std::endl;
}
###Output
_____no_output_____
###Markdown
Build and RunSelect the cell below and click run ▶ to compile and execute the code above:
###Code
! chmod 755 q; chmod 755 run_hello.sh;if [ -x "$(command -v qsub)" ]; then ./q run_hello.sh; else run_hello.sh; fi
###Output
_____no_output_____
###Markdown
Get started on Module 1[Click Here](../01_oneAPI_Intro/oneAPI_Intro.ipynb) Refresh Your Jupyter NotebooksIf it's been awhile since you started exploring the notebooks, you will likely need to update them for compatibility with Intel® DevCloud for oneAPI and the Intel® oneAPI DPC++ Compiler to keep up with the latest updates.Run the cell below to get latest and replace with latest version of oneAPI Essentials Modules:
###Code
!/data/oneapi_workshop/get_jupyter_notebooks.sh
###Output
_____no_output_____
Model backlog/Train/125-melanoma-5fold-efficientnetb4-256-oversampling.ipynb | ###Markdown
Dependencies
###Code
!pip install --quiet efficientnet
# !pip install --quiet image-classifiers
import warnings, json, re, glob, math
# from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler
from tensorflow.keras import optimizers, layers, metrics, losses, Model
import tensorflow_addons as tfa
import efficientnet.tfkeras as efn
# from classification_models.tfkeras import Classifiers
SEED = 42
seed_everything(SEED)
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Model parameters
###Code
config = {
"HEIGHT": 256,
"WIDTH": 256,
"CHANNELS": 3,
"BATCH_SIZE": 32,
"EPOCHS": 12,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 5,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"TTA_STEPS": 25,
"BASE_MODEL": 'EfficientNetB4',
"BASE_MODEL_WEIGHTS": 'imagenet',
"DATASET_PATH": 'melanoma-256x256'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
train = pd.read_csv(database_base_path + 'train.csv')
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = KaggleDatasets().get_gcs_path(f"melanoma-{config['HEIGHT']}x{config['WIDTH']}")
GCS_2019_PATH = KaggleDatasets().get_gcs_path(f"isic2019-{config['HEIGHT']}x{config['WIDTH']}")
GCS_MALIGNANT_PATH = KaggleDatasets().get_gcs_path(f"malignant-v2-{config['HEIGHT']}x{config['WIDTH']}")
###Output
Train samples: 33126
###Markdown
Augmentations
###Code
def data_augment(image):
p_rotation = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_cutout = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_shear = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
if p_shear > .2:
if p_shear > .6:
image = transform_shear(image, config['HEIGHT'], shear=5.)
else:
image = transform_shear(image, config['HEIGHT'], shear=-5.)
if p_rotation > .2:
if p_rotation > .6:
image = transform_rotation(image, config['HEIGHT'], rotation=45.)
else:
image = transform_rotation(image, config['HEIGHT'], rotation=-45.)
if p_crop > .5:
image = data_augment_crop(image)
if p_rotate > .2:
image = data_augment_rotate(image)
image = data_augment_spatial(image)
image = tf.image.random_saturation(image, 0.7, 1.3)
image = tf.image.random_contrast(image, 0.8, 1.2)
image = tf.image.random_brightness(image, 0.1)
if p_cutout > .7:
image = data_augment_cutout(image)
return image
def data_augment_tta(image):
p_rotation = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
if p_rotation > .2:
if p_rotation > .6:
image = transform_rotation(image, config['HEIGHT'], rotation=45.)
else:
image = transform_rotation(image, config['HEIGHT'], rotation=-45.)
if p_crop > .5:
image = data_augment_crop(image)
if p_rotate > .2:
image = data_augment_rotate(image)
image = data_augment_spatial(image)
image = tf.image.random_saturation(image, 0.7, 1.3)
image = tf.image.random_contrast(image, 0.8, 1.2)
image = tf.image.random_brightness(image, 0.1)
return image
def data_augment_spatial(image):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
return image
def data_augment_rotate(image):
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
if p_rotate > .66:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .33:
image = tf.image.rot90(image, k=2) # rotate 180º
else:
image = tf.image.rot90(image, k=1) # rotate 90º
return image
def data_augment_crop(image):
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
crop_size = tf.random.uniform([], int(config['HEIGHT']*.7), config['HEIGHT'], dtype=tf.int32)
if p_crop > .5:
image = tf.image.random_crop(image, size=[crop_size, crop_size, config['CHANNELS']])
else:
if p_crop > .4:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .2:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
image = tf.image.resize(image, size=[config['HEIGHT'], config['WIDTH']])
return image
def data_augment_cutout(image, min_mask_size=(int(config['HEIGHT'] * .1), int(config['HEIGHT'] * .1)),
max_mask_size=(int(config['HEIGHT'] * .125), int(config['HEIGHT'] * .125))):
p_cutout = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
if p_cutout > .85: # 10~15 cut outs
n_cutout = tf.random.uniform([], 10, 15, dtype=tf.int32)
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=n_cutout)
elif p_cutout > .6: # 5~10 cut outs
n_cutout = tf.random.uniform([], 5, 10, dtype=tf.int32)
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=n_cutout)
elif p_cutout > .25: # 2~5 cut outs
n_cutout = tf.random.uniform([], 2, 5, dtype=tf.int32)
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=n_cutout)
else: # 1 cut out
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=1)
return image
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
def read_labeled_tfrecord(example):
tfrec_format = {
'image' : tf.io.FixedLenFeature([], tf.string),
'image_name' : tf.io.FixedLenFeature([], tf.string),
'patient_id' : tf.io.FixedLenFeature([], tf.int64),
'sex' : tf.io.FixedLenFeature([], tf.int64),
'age_approx' : tf.io.FixedLenFeature([], tf.int64),
'anatom_site_general_challenge': tf.io.FixedLenFeature([], tf.int64),
'diagnosis' : tf.io.FixedLenFeature([], tf.int64),
'target' : tf.io.FixedLenFeature([], tf.int64)
}
example = tf.io.parse_single_example(example, tfrec_format)
return example['image'], example['target']
def read_unlabeled_tfrecord(example, return_image_name):
tfrec_format = {
'image' : tf.io.FixedLenFeature([], tf.string),
'image_name' : tf.io.FixedLenFeature([], tf.string),
}
example = tf.io.parse_single_example(example, tfrec_format)
return example['image'], example['image_name'] if return_image_name else 0
def prepare_image(img, augment=None, dim=256):
img = tf.image.decode_jpeg(img, channels=3)
img = tf.cast(img, tf.float32) / 255.0
if augment:
img = augment(img)
img = tf.reshape(img, [dim, dim, 3])
return img
def get_dataset(files, augment=None, shuffle=False, repeat=False,
labeled=True, return_image_names=True, batch_size=16, dim=256):
ds = tf.data.TFRecordDataset(files, num_parallel_reads=AUTO)
ds = ds.cache()
if repeat:
ds = ds.repeat()
if shuffle:
ds = ds.shuffle(1024*8)
opt = tf.data.Options()
opt.experimental_deterministic = False
ds = ds.with_options(opt)
if labeled:
ds = ds.map(read_labeled_tfrecord, num_parallel_calls=AUTO)
else:
ds = ds.map(lambda example: read_unlabeled_tfrecord(example, return_image_names),
num_parallel_calls=AUTO)
ds = ds.map(lambda img, imgname_or_label: (prepare_image(img, augment=augment, dim=dim),
imgname_or_label), num_parallel_calls=AUTO)
ds = ds.batch(batch_size * REPLICAS)
ds = ds.prefetch(AUTO)
return ds
def get_dataset_sampling(files, augment=None, shuffle=False, repeat=False,
labeled=True, return_image_names=True, batch_size=16, dim=256):
ds = tf.data.TFRecordDataset(files, num_parallel_reads=AUTO)
ds = ds.cache()
if repeat:
ds = ds.repeat()
if shuffle:
ds = ds.shuffle(1024*8)
opt = tf.data.Options()
opt.experimental_deterministic = False
ds = ds.with_options(opt)
if labeled:
ds = ds.map(read_labeled_tfrecord, num_parallel_calls=AUTO)
else:
ds = ds.map(lambda example: read_unlabeled_tfrecord(example, return_image_names),
num_parallel_calls=AUTO)
ds = ds.map(lambda img, imgname_or_label: (prepare_image(img, augment=augment, dim=dim),
imgname_or_label), num_parallel_calls=AUTO)
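    # Note: unlike get_dataset(), this function intentionally returns an element-level (unbatched)
    # dataset; it is interleaved via tf.data.experimental.sample_from_datasets() in the training
    # loop below and only batched/prefetched after the resampling.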
return ds
def count_data_items(filenames):
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1))
for filename in filenames]
return np.sum(n)
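# Example (hypothetical shard name): 'train00-2071.tfrec' contributes 2071 to the total, i.e. the
# sample count encoded between the dash and the extension of each TFRecord filename.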
###Output
_____no_output_____
###Markdown
Learning rate scheduler
###Code
def lrfn(epoch):
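    # Ramp-up / hold / decay schedule: linear warm-up from lr_start to lr_max over lr_ramp_ep epochs,
    # lr_max held for lr_sus_ep epochs, then exponential decay by lr_decay towards lr_min (see plot below).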
lr_start = 0.000005
lr_max = 0.00000125 * REPLICAS * config['BATCH_SIZE']
lr_min = 0.000001
lr_ramp_ep = 15 #5
lr_sus_ep = 5
lr_decay = 0.9 #0.8
if epoch < lr_ramp_ep:
lr = (lr_max - lr_start) / lr_ramp_ep * epoch + lr_start
elif epoch < lr_ramp_ep + lr_sus_ep:
lr = lr_max
else:
lr = (lr_max - lr_min) * lr_decay**(epoch - lr_ramp_ep - lr_sus_ep) + lr_min
return lr
def get_lr_callback():
lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=False)
return lr_callback
total_steps = config['EPOCHS'] * 7
rng = [i for i in range(0, total_steps)]
y = [lrfn(x) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 5e-06 to 0.00032 to 1.42e-06
###Markdown
Model
###Code
# Positive dist (datasets):
# Data: Pos | Neg
# 2020: 584 | 32108
# 2018: 1627 | 11232
# 2019: 2858 | 9555
# Positive dist (upsampled):
# 2020: 581
# New: 580
# 2018: 1614
# 2019: 1185
# Initial bias
pos = (584 + 1627) + (581 + 580 + 1614 + 1185)
neg = 32108 + 11232
initial_bias = np.log([pos/neg])
print('Bias')
print(pos)
print(neg)
print(initial_bias)
# class weights
total = len(train)
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Class weight')
print(class_weight)
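# Note (added): initial_bias is the log-odds log(pos/neg) that could be used to seed the output
# layer's bias, and class_weight is the usual inverse-frequency weighting. Neither is passed to the
# model in the code below; the class imbalance is instead addressed by the oversampled tf.data
# pipeline built in the training loop.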
def model_fn(input_shape=(256, 256, 3)):
input_image = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB4(input_shape=input_shape,
weights=config['BASE_MODEL_WEIGHTS'],
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid', name='output')(x)
model = Model(inputs=input_image, outputs=output)
opt = optimizers.Adam(learning_rate=0.001)
loss = losses.BinaryCrossentropy(label_smoothing=0.05)
model.compile(optimizer=opt, loss=loss, metrics=['AUC'])
return model
###Output
_____no_output_____
###Markdown
Training
###Code
skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
oof_pred = []; oof_tar = []; oof_val = []; oof_names = []; oof_folds = []; history_list = []; oof_pred_last = []
preds = np.zeros((len(test), 1))
preds_last = np.zeros((len(test), 1))
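# preds / preds_last accumulate the fold-averaged, TTA-averaged test predictions produced by the
# best-val-loss and best-val-AUC checkpoints respectively (see the two load_weights() calls below).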
for fold,(idxT, idxV) in enumerate(skf.split(np.arange(15))):
if tpu: tf.tpu.experimental.initialize_tpu_system(tpu)
print(f'\nFOLD: {fold+1}')
print(f'TRAIN: {idxT} VALID: {idxV}')
# CREATE TRAIN AND VALIDATION SUBSETS
TRAINING_FILENAMES = tf.io.gfile.glob([GCS_PATH + '/train%.2i*.tfrec' % x for x in idxT])
# Add external data
# TRAINING_FILENAMES += tf.io.gfile.glob([GCS_2019_PATH + '/train%.2i*.tfrec' % (x*2+1) for x in idxT]) # 2019 data
TRAINING_FILENAMES += tf.io.gfile.glob([GCS_2019_PATH + '/train%.2i*.tfrec' % (x*2) for x in idxT]) # 2018 data
# Add extra malignant data
TRAINING_MALIG_FILENAMES = tf.io.gfile.glob([GCS_MALIGNANT_PATH + '/train%.2i*.tfrec' % x for x in idxT]) # 2020 data
TRAINING_MALIG_FILENAMES += tf.io.gfile.glob([GCS_MALIGNANT_PATH + '/train%.2i*.tfrec' % ((x*2+1)+30) for x in idxT]) # 2019 data
TRAINING_MALIG_FILENAMES += tf.io.gfile.glob([GCS_MALIGNANT_PATH + '/train%.2i*.tfrec' % ((x*2)+30) for x in idxT]) # 2018 data
TRAINING_MALIG_FILENAMES += tf.io.gfile.glob([GCS_MALIGNANT_PATH + '/train%.2i*.tfrec' % (x+15) for x in idxT]) # new data
np.random.shuffle(TRAINING_FILENAMES)
np.random.shuffle(TRAINING_MALIG_FILENAMES)
ds_regular = get_dataset_sampling(TRAINING_FILENAMES, augment=data_augment, shuffle=True, repeat=True,
dim=config['HEIGHT'], batch_size=config['BATCH_SIZE'])
ds_malig = get_dataset_sampling(TRAINING_MALIG_FILENAMES, augment=data_augment, shuffle=True, repeat=True,
dim=config['HEIGHT'], batch_size=config['BATCH_SIZE'])
    # Resampled TF Dataset: interleave ~60% regular / ~40% malignant-only examples to oversample positives
resampled_ds = tf.data.experimental.sample_from_datasets([ds_regular, ds_malig], weights=[0.6, 0.4])
resampled_ds = resampled_ds.batch(config['BATCH_SIZE'] * REPLICAS).prefetch(AUTO)
files_valid = tf.io.gfile.glob([GCS_PATH + '/train%.2i*.tfrec'%x for x in idxV])
TEST_FILENAMES = np.sort(np.array(tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')))
ct_valid = count_data_items(files_valid)
ct_test = count_data_items(TEST_FILENAMES)
VALID_STEPS = config['TTA_STEPS'] * ct_valid/config['BATCH_SIZE']/4/REPLICAS
TEST_STEPS = config['TTA_STEPS'] * ct_test/config['BATCH_SIZE']/4/REPLICAS
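    # Predictions below use batch_size * 4, hence the extra /4; the TTA_STEPS factor makes predict()
    # run TTA_STEPS full passes over the repeated dataset (any surplus is truncated after predict).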
# MODEL
K.clear_session()
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
model_path_best_auc = f'model_{fold}_auc.h5'
model_path_best_loss = f'model_{fold}_loss.h5'
# model_path_last = f'model_{fold}_last.h5'
checkpoint_auc = ModelCheckpoint(model_path_best_auc, monitor='val_auc', mode='max', save_best_only=True,
save_weights_only=True, verbose=0)
checkpoint_loss = ModelCheckpoint(model_path_best_loss, monitor='val_loss', mode='min', save_best_only=True,
save_weights_only=True, verbose=0)
# TRAIN
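    # steps_per_epoch: the regular files make up ~60% of the resampled stream, so its size is
    # approximated by count/0.6; dividing by 10 splits each nominal pass into 10 short epochs
    # (the fit call runs EPOCHS * 7 of them).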
history = model.fit(resampled_ds,
validation_data=get_dataset(files_valid, augment=None, shuffle=False,
repeat=False, dim=config['HEIGHT']),
steps_per_epoch=((count_data_items(TRAINING_FILENAMES) / 0.6)/config['BATCH_SIZE']//REPLICAS / 10),
callbacks=[checkpoint_auc, checkpoint_loss, get_lr_callback()],
epochs=config['EPOCHS'] * 7,
verbose=2).history
history_list.append(history)
# Save last model weights
# model.save_weights(model_path_last)
# GET OOF TARGETS AND NAMES
ds_valid = get_dataset(files_valid, augment=None, repeat=False, dim=config['HEIGHT'],
labeled=True, return_image_names=True)
oof_tar.append(np.array([target.numpy() for img, target in iter(ds_valid.unbatch())]))
oof_folds.append(np.ones_like(oof_tar[-1], dtype='int8')*fold)
ds = get_dataset(files_valid, augment=None, repeat=False, dim=config['HEIGHT'],
labeled=False, return_image_names=True)
oof_names.append(np.array([img_name.numpy().decode("utf-8") for img, img_name in iter(ds.unbatch())]))
# Load best model weights (AUC)
model.load_weights(model_path_best_auc)
# PREDICT OOF USING TTA (AUC)
print('Predicting OOF with TTA (AUC)...')
ds_valid = get_dataset(files_valid, labeled=False, return_image_names=False, augment=data_augment_tta,
repeat=True, shuffle=False, dim=config['HEIGHT'], batch_size=config['BATCH_SIZE']*4)
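    # predict() returns TTA_STEPS stacked passes over the validation set; reshape(..., order='F')
    # groups the TTA_STEPS augmented predictions of each image into one row before averaging.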
pred = model.predict(ds_valid, steps=VALID_STEPS, verbose=2)[:config['TTA_STEPS']*ct_valid,]
oof_pred_last.append(np.mean(pred.reshape((ct_valid, config['TTA_STEPS']), order='F'),axis=1))
# PREDICT TEST USING TTA (AUC)
print('Predicting Test with TTA (AUC)...')
ds_test = get_dataset(TEST_FILENAMES, labeled=False, return_image_names=False, augment=data_augment_tta,
repeat=True, shuffle=False, dim=config['HEIGHT'], batch_size=config['BATCH_SIZE']*4)
pred = model.predict(ds_test, steps=TEST_STEPS, verbose=2)[:config['TTA_STEPS']*ct_test,]
preds_last[:,0] += np.mean(pred.reshape((ct_test, config['TTA_STEPS']), order='F'), axis=1) / config['N_USED_FOLDS']
# Load best model weights (Loss)
model.load_weights(model_path_best_loss)
# PREDICT OOF USING TTA (Loss)
print('Predicting OOF with TTA (Loss)...')
pred = model.predict(ds_valid, steps=VALID_STEPS, verbose=2)[:config['TTA_STEPS']*ct_valid,]
oof_pred.append(np.mean(pred.reshape((ct_valid, config['TTA_STEPS']), order='F'), axis=1))
# PREDICT TEST USING TTA (Loss)
print('Predicting Test with TTA (Loss)...')
pred = model.predict(ds_test, steps=TEST_STEPS, verbose=2)[:config['TTA_STEPS']*ct_test,]
preds[:,0] += np.mean(pred.reshape((ct_test, config['TTA_STEPS']), order='F'), axis=1) / config['N_USED_FOLDS']
# REPORT RESULTS
auc = roc_auc_score(oof_tar[-1], oof_pred[-1])
auc_last = roc_auc_score(oof_tar[-1], oof_pred_last[-1])
auc_blend = roc_auc_score(oof_tar[-1], np.mean([oof_pred[-1], oof_pred_last[-1]], axis=0))
oof_val.append(np.max(history['val_auc']))
print(f'#### FOLD {fold+1} OOF AUC = {oof_val[-1]:.3f}, with TTA (Loss) = {auc:.3f}, with TTA (AUC) = {auc_last:.3f}, with TTA (Blend) = {auc_blend:.3f}')
###Output
FOLD: 1
TRAIN: [ 1 2 3 4 5 6 7 8 10 12 13 14] VALID: [ 0 9 11]
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b4_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5
71892992/71892840 [==============================] - 1s 0us/step
Epoch 1/84
24/23 - 36s - auc: 0.5168 - loss: 0.6991 - val_auc: 0.5417 - val_loss: 0.6819 - lr: 5.0000e-06
Epoch 2/84
24/23 - 15s - auc: 0.7097 - loss: 0.6378 - val_auc: 0.6335 - val_loss: 0.5066 - lr: 2.6000e-05
Epoch 3/84
24/23 - 15s - auc: 0.8419 - loss: 0.5291 - val_auc: 0.6660 - val_loss: 0.3569 - lr: 4.7000e-05
Epoch 4/84
24/23 - 15s - auc: 0.8722 - loss: 0.4734 - val_auc: 0.6754 - val_loss: 0.3099 - lr: 6.8000e-05
Epoch 5/84
24/23 - 15s - auc: 0.8889 - loss: 0.4493 - val_auc: 0.6902 - val_loss: 0.2699 - lr: 8.9000e-05
Epoch 6/84
24/23 - 15s - auc: 0.9169 - loss: 0.4058 - val_auc: 0.7574 - val_loss: 0.2517 - lr: 1.1000e-04
Epoch 7/84
24/23 - 15s - auc: 0.9264 - loss: 0.3879 - val_auc: 0.7994 - val_loss: 0.2330 - lr: 1.3100e-04
Epoch 8/84
24/23 - 15s - auc: 0.9379 - loss: 0.3647 - val_auc: 0.8218 - val_loss: 0.2189 - lr: 1.5200e-04
Epoch 9/84
24/23 - 12s - auc: 0.9366 - loss: 0.3674 - val_auc: 0.8489 - val_loss: 0.2333 - lr: 1.7300e-04
Epoch 10/84
24/23 - 12s - auc: 0.9333 - loss: 0.3751 - val_auc: 0.8684 - val_loss: 0.2220 - lr: 1.9400e-04
Epoch 11/84
24/23 - 12s - auc: 0.9207 - loss: 0.3992 - val_auc: 0.8802 - val_loss: 0.2766 - lr: 2.1500e-04
Epoch 12/84
24/23 - 12s - auc: 0.9275 - loss: 0.3868 - val_auc: 0.8873 - val_loss: 0.2261 - lr: 2.3600e-04
Epoch 13/84
24/23 - 13s - auc: 0.9360 - loss: 0.3699 - val_auc: 0.8898 - val_loss: 0.2501 - lr: 2.5700e-04
Epoch 14/84
24/23 - 10s - auc: 0.9382 - loss: 0.3663 - val_auc: 0.8740 - val_loss: 0.2600 - lr: 2.7800e-04
Epoch 15/84
24/23 - 10s - auc: 0.9350 - loss: 0.3727 - val_auc: 0.8887 - val_loss: 0.2568 - lr: 2.9900e-04
Epoch 16/84
24/23 - 13s - auc: 0.9572 - loss: 0.3237 - val_auc: 0.8730 - val_loss: 0.1974 - lr: 3.2000e-04
Epoch 17/84
24/23 - 12s - auc: 0.9614 - loss: 0.3126 - val_auc: 0.8973 - val_loss: 0.2538 - lr: 3.2000e-04
Epoch 18/84
24/23 - 14s - auc: 0.9683 - loss: 0.2933 - val_auc: 0.9037 - val_loss: 0.2135 - lr: 3.2000e-04
Epoch 19/84
24/23 - 10s - auc: 0.9711 - loss: 0.2858 - val_auc: 0.8971 - val_loss: 0.2203 - lr: 3.2000e-04
Epoch 20/84
24/23 - 10s - auc: 0.9601 - loss: 0.3159 - val_auc: 0.8941 - val_loss: 0.2723 - lr: 3.2000e-04
Epoch 21/84
24/23 - 13s - auc: 0.9538 - loss: 0.3323 - val_auc: 0.9208 - val_loss: 0.2402 - lr: 3.2000e-04
Epoch 22/84
24/23 - 10s - auc: 0.9593 - loss: 0.3188 - val_auc: 0.9039 - val_loss: 0.2615 - lr: 2.8810e-04
Epoch 23/84
24/23 - 12s - auc: 0.9633 - loss: 0.3081 - val_auc: 0.8932 - val_loss: 0.1972 - lr: 2.5939e-04
Epoch 24/84
24/23 - 10s - auc: 0.9622 - loss: 0.3102 - val_auc: 0.8890 - val_loss: 0.2031 - lr: 2.3355e-04
Epoch 25/84
24/23 - 10s - auc: 0.9694 - loss: 0.2924 - val_auc: 0.8923 - val_loss: 0.2056 - lr: 2.1030e-04
Epoch 26/84
24/23 - 12s - auc: 0.9750 - loss: 0.2731 - val_auc: 0.8999 - val_loss: 0.1905 - lr: 1.8937e-04
Epoch 27/84
24/23 - 10s - auc: 0.9793 - loss: 0.2576 - val_auc: 0.8912 - val_loss: 0.1923 - lr: 1.7053e-04
Epoch 28/84
24/23 - 10s - auc: 0.9833 - loss: 0.2425 - val_auc: 0.8981 - val_loss: 0.1938 - lr: 1.5358e-04
Epoch 29/84
24/23 - 12s - auc: 0.9850 - loss: 0.2365 - val_auc: 0.8948 - val_loss: 0.1894 - lr: 1.3832e-04
Epoch 30/84
24/23 - 10s - auc: 0.9760 - loss: 0.2678 - val_auc: 0.8926 - val_loss: 0.2054 - lr: 1.2459e-04
Epoch 31/84
24/23 - 10s - auc: 0.9768 - loss: 0.2678 - val_auc: 0.8963 - val_loss: 0.2076 - lr: 1.1223e-04
Epoch 32/84
24/23 - 10s - auc: 0.9795 - loss: 0.2577 - val_auc: 0.9050 - val_loss: 0.1984 - lr: 1.0111e-04
Epoch 33/84
24/23 - 10s - auc: 0.9788 - loss: 0.2590 - val_auc: 0.9045 - val_loss: 0.2034 - lr: 9.1095e-05
Epoch 34/84
24/23 - 10s - auc: 0.9804 - loss: 0.2561 - val_auc: 0.9015 - val_loss: 0.2128 - lr: 8.2086e-05
Epoch 35/84
24/23 - 10s - auc: 0.9828 - loss: 0.2458 - val_auc: 0.9043 - val_loss: 0.2070 - lr: 7.3977e-05
Epoch 36/84
24/23 - 10s - auc: 0.9879 - loss: 0.2231 - val_auc: 0.8945 - val_loss: 0.1963 - lr: 6.6679e-05
Epoch 37/84
24/23 - 10s - auc: 0.9896 - loss: 0.2160 - val_auc: 0.8980 - val_loss: 0.1957 - lr: 6.0111e-05
Epoch 38/84
24/23 - 10s - auc: 0.9895 - loss: 0.2140 - val_auc: 0.8966 - val_loss: 0.1919 - lr: 5.4200e-05
Epoch 39/84
24/23 - 10s - auc: 0.9861 - loss: 0.2292 - val_auc: 0.8910 - val_loss: 0.1909 - lr: 4.8880e-05
Epoch 40/84
24/23 - 10s - auc: 0.9837 - loss: 0.2411 - val_auc: 0.8879 - val_loss: 0.1896 - lr: 4.4092e-05
Epoch 41/84
24/23 - 10s - auc: 0.9827 - loss: 0.2463 - val_auc: 0.8902 - val_loss: 0.1938 - lr: 3.9783e-05
Epoch 42/84
24/23 - 10s - auc: 0.9865 - loss: 0.2325 - val_auc: 0.8936 - val_loss: 0.1902 - lr: 3.5905e-05
Epoch 43/84
24/23 - 10s - auc: 0.9856 - loss: 0.2348 - val_auc: 0.8961 - val_loss: 0.1955 - lr: 3.2414e-05
Epoch 44/84
24/23 - 10s - auc: 0.9836 - loss: 0.2438 - val_auc: 0.8836 - val_loss: 0.1938 - lr: 2.9273e-05
Epoch 45/84
24/23 - 10s - auc: 0.9872 - loss: 0.2269 - val_auc: 0.8925 - val_loss: 0.1980 - lr: 2.6445e-05
Epoch 46/84
24/23 - 10s - auc: 0.9907 - loss: 0.2124 - val_auc: 0.8926 - val_loss: 0.1991 - lr: 2.3901e-05
Epoch 47/84
24/23 - 10s - auc: 0.9925 - loss: 0.2027 - val_auc: 0.8957 - val_loss: 0.1997 - lr: 2.1611e-05
Epoch 48/84
24/23 - 10s - auc: 0.9907 - loss: 0.2109 - val_auc: 0.8858 - val_loss: 0.1945 - lr: 1.9550e-05
Epoch 49/84
24/23 - 10s - auc: 0.9914 - loss: 0.2092 - val_auc: 0.8910 - val_loss: 0.1975 - lr: 1.7695e-05
Epoch 50/84
24/23 - 10s - auc: 0.9863 - loss: 0.2302 - val_auc: 0.8826 - val_loss: 0.1911 - lr: 1.6025e-05
Epoch 51/84
24/23 - 10s - auc: 0.9869 - loss: 0.2295 - val_auc: 0.8824 - val_loss: 0.1911 - lr: 1.4523e-05
Epoch 52/84
24/23 - 10s - auc: 0.9864 - loss: 0.2295 - val_auc: 0.8826 - val_loss: 0.1920 - lr: 1.3171e-05
Epoch 53/84
24/23 - 10s - auc: 0.9863 - loss: 0.2317 - val_auc: 0.8810 - val_loss: 0.1943 - lr: 1.1953e-05
Epoch 54/84
24/23 - 10s - auc: 0.9859 - loss: 0.2332 - val_auc: 0.8823 - val_loss: 0.1959 - lr: 1.0858e-05
Epoch 55/84
24/23 - 10s - auc: 0.9896 - loss: 0.2188 - val_auc: 0.8876 - val_loss: 0.1985 - lr: 9.8723e-06
Epoch 56/84
24/23 - 10s - auc: 0.9907 - loss: 0.2113 - val_auc: 0.8922 - val_loss: 0.1993 - lr: 8.9851e-06
Epoch 57/84
24/23 - 10s - auc: 0.9926 - loss: 0.2019 - val_auc: 0.8965 - val_loss: 0.2009 - lr: 8.1866e-06
Epoch 58/84
24/23 - 10s - auc: 0.9922 - loss: 0.2036 - val_auc: 0.8905 - val_loss: 0.1984 - lr: 7.4679e-06
Epoch 59/84
24/23 - 11s - auc: 0.9903 - loss: 0.2124 - val_auc: 0.8890 - val_loss: 0.1980 - lr: 6.8211e-06
Epoch 60/84
24/23 - 10s - auc: 0.9891 - loss: 0.2206 - val_auc: 0.8876 - val_loss: 0.1957 - lr: 6.2390e-06
Epoch 61/84
24/23 - 10s - auc: 0.9878 - loss: 0.2265 - val_auc: 0.8824 - val_loss: 0.1937 - lr: 5.7151e-06
Epoch 62/84
24/23 - 10s - auc: 0.9882 - loss: 0.2239 - val_auc: 0.8830 - val_loss: 0.1932 - lr: 5.2436e-06
Epoch 63/84
24/23 - 10s - auc: 0.9862 - loss: 0.2319 - val_auc: 0.8858 - val_loss: 0.1937 - lr: 4.8192e-06
Epoch 64/84
24/23 - 10s - auc: 0.9866 - loss: 0.2306 - val_auc: 0.8875 - val_loss: 0.1941 - lr: 4.4373e-06
Epoch 65/84
24/23 - 10s - auc: 0.9904 - loss: 0.2144 - val_auc: 0.8898 - val_loss: 0.1945 - lr: 4.0936e-06
Epoch 66/84
24/23 - 10s - auc: 0.9909 - loss: 0.2090 - val_auc: 0.8937 - val_loss: 0.1962 - lr: 3.7842e-06
Epoch 67/84
24/23 - 10s - auc: 0.9928 - loss: 0.2026 - val_auc: 0.8920 - val_loss: 0.1964 - lr: 3.5058e-06
Epoch 68/84
24/23 - 10s - auc: 0.9918 - loss: 0.2057 - val_auc: 0.8887 - val_loss: 0.1968 - lr: 3.2552e-06
Epoch 69/84
24/23 - 10s - auc: 0.9917 - loss: 0.2072 - val_auc: 0.8901 - val_loss: 0.1968 - lr: 3.0297e-06
Epoch 70/84
24/23 - 10s - auc: 0.9881 - loss: 0.2246 - val_auc: 0.8901 - val_loss: 0.1950 - lr: 2.8267e-06
Epoch 71/84
24/23 - 10s - auc: 0.9886 - loss: 0.2234 - val_auc: 0.8849 - val_loss: 0.1937 - lr: 2.6441e-06
Epoch 72/84
24/23 - 10s - auc: 0.9887 - loss: 0.2215 - val_auc: 0.8838 - val_loss: 0.1930 - lr: 2.4796e-06
Epoch 73/84
24/23 - 10s - auc: 0.9877 - loss: 0.2243 - val_auc: 0.8836 - val_loss: 0.1932 - lr: 2.3317e-06
Epoch 74/84
24/23 - 10s - auc: 0.9879 - loss: 0.2243 - val_auc: 0.8854 - val_loss: 0.1925 - lr: 2.1985e-06
Epoch 75/84
24/23 - 10s - auc: 0.9905 - loss: 0.2137 - val_auc: 0.8867 - val_loss: 0.1928 - lr: 2.0787e-06
Epoch 76/84
24/23 - 10s - auc: 0.9927 - loss: 0.2013 - val_auc: 0.8854 - val_loss: 0.1936 - lr: 1.9708e-06
Epoch 77/84
24/23 - 10s - auc: 0.9934 - loss: 0.1979 - val_auc: 0.8869 - val_loss: 0.1947 - lr: 1.8737e-06
Epoch 78/84
24/23 - 10s - auc: 0.9926 - loss: 0.2017 - val_auc: 0.8853 - val_loss: 0.1956 - lr: 1.7863e-06
Epoch 79/84
24/23 - 10s - auc: 0.9908 - loss: 0.2115 - val_auc: 0.8874 - val_loss: 0.1956 - lr: 1.7077e-06
Epoch 80/84
24/23 - 10s - auc: 0.9865 - loss: 0.2289 - val_auc: 0.8870 - val_loss: 0.1947 - lr: 1.6369e-06
Epoch 81/84
24/23 - 10s - auc: 0.9882 - loss: 0.2219 - val_auc: 0.8853 - val_loss: 0.1940 - lr: 1.5732e-06
Epoch 82/84
24/23 - 10s - auc: 0.9875 - loss: 0.2264 - val_auc: 0.8839 - val_loss: 0.1936 - lr: 1.5159e-06
Epoch 83/84
24/23 - 10s - auc: 0.9876 - loss: 0.2254 - val_auc: 0.8865 - val_loss: 0.1935 - lr: 1.4643e-06
Epoch 84/84
24/23 - 10s - auc: 0.9896 - loss: 0.2177 - val_auc: 0.8884 - val_loss: 0.1937 - lr: 1.4179e-06
Predicting OOF with TTA (AUC)...
160/159 - 45s
Predicting Test with TTA (AUC)...
269/268 - 77s
Predicting OOF with TTA (Loss)...
160/159 - 46s
Predicting Test with TTA (Loss)...
269/268 - 78s
#### FOLD 1 OOF AUC = 0.921, with TTA (Loss) = 0.914, with TTA (AUC) = 0.922, with TTA (Blend) = 0.926
FOLD: 2
TRAIN: [ 0 1 2 3 4 6 7 9 10 11 12 14] VALID: [ 5 8 13]
Epoch 1/84
24/23 - 35s - auc: 0.5436 - loss: 0.6883 - val_auc: 0.4341 - val_loss: 0.7872 - lr: 5.0000e-06
Epoch 2/84
24/23 - 15s - auc: 0.7425 - loss: 0.6164 - val_auc: 0.5488 - val_loss: 0.5960 - lr: 2.6000e-05
Epoch 3/84
24/23 - 15s - auc: 0.8448 - loss: 0.5207 - val_auc: 0.6219 - val_loss: 0.4239 - lr: 4.7000e-05
Epoch 4/84
24/23 - 15s - auc: 0.8769 - loss: 0.4673 - val_auc: 0.6608 - val_loss: 0.3478 - lr: 6.8000e-05
Epoch 5/84
24/23 - 15s - auc: 0.8768 - loss: 0.4658 - val_auc: 0.7103 - val_loss: 0.3007 - lr: 8.9000e-05
Epoch 6/84
24/23 - 15s - auc: 0.9099 - loss: 0.4182 - val_auc: 0.7562 - val_loss: 0.2840 - lr: 1.1000e-04
Epoch 7/84
24/23 - 15s - auc: 0.9283 - loss: 0.3868 - val_auc: 0.8068 - val_loss: 0.2222 - lr: 1.3100e-04
Epoch 8/84
24/23 - 15s - auc: 0.9348 - loss: 0.3724 - val_auc: 0.8209 - val_loss: 0.2188 - lr: 1.5200e-04
Epoch 9/84
24/23 - 12s - auc: 0.9391 - loss: 0.3639 - val_auc: 0.8484 - val_loss: 0.2230 - lr: 1.7300e-04
Epoch 10/84
24/23 - 12s - auc: 0.9430 - loss: 0.3555 - val_auc: 0.8550 - val_loss: 0.2496 - lr: 1.9400e-04
Epoch 11/84
24/23 - 12s - auc: 0.9426 - loss: 0.3569 - val_auc: 0.8763 - val_loss: 0.3074 - lr: 2.1500e-04
Epoch 12/84
24/23 - 10s - auc: 0.9374 - loss: 0.3678 - val_auc: 0.8704 - val_loss: 0.2609 - lr: 2.3600e-04
Epoch 13/84
24/23 - 12s - auc: 0.9330 - loss: 0.3768 - val_auc: 0.8454 - val_loss: 0.2104 - lr: 2.5700e-04
Epoch 14/84
24/23 - 10s - auc: 0.9372 - loss: 0.3675 - val_auc: 0.8632 - val_loss: 0.2293 - lr: 2.7800e-04
Epoch 15/84
24/23 - 12s - auc: 0.9411 - loss: 0.3615 - val_auc: 0.8958 - val_loss: 0.2711 - lr: 2.9900e-04
Epoch 16/84
24/23 - 12s - auc: 0.9487 - loss: 0.3440 - val_auc: 0.8970 - val_loss: 0.2764 - lr: 3.2000e-04
Epoch 17/84
24/23 - 10s - auc: 0.9579 - loss: 0.3212 - val_auc: 0.8720 - val_loss: 0.2508 - lr: 3.2000e-04
Epoch 18/84
24/23 - 12s - auc: 0.9681 - loss: 0.2962 - val_auc: 0.8988 - val_loss: 0.2605 - lr: 3.2000e-04
Epoch 19/84
24/23 - 10s - auc: 0.9653 - loss: 0.3026 - val_auc: 0.8846 - val_loss: 0.2473 - lr: 3.2000e-04
Epoch 20/84
24/23 - 10s - auc: 0.9707 - loss: 0.2875 - val_auc: 0.8842 - val_loss: 0.2146 - lr: 3.2000e-04
Epoch 21/84
24/23 - 10s - auc: 0.9653 - loss: 0.3022 - val_auc: 0.8720 - val_loss: 0.2161 - lr: 3.2000e-04
Epoch 22/84
24/23 - 10s - auc: 0.9609 - loss: 0.3145 - val_auc: 0.8858 - val_loss: 0.2274 - lr: 2.8810e-04
Epoch 23/84
24/23 - 10s - auc: 0.9682 - loss: 0.2947 - val_auc: 0.8444 - val_loss: 0.2256 - lr: 2.5939e-04
Epoch 24/84
24/23 - 10s - auc: 0.9661 - loss: 0.2992 - val_auc: 0.8816 - val_loss: 0.2527 - lr: 2.3355e-04
Epoch 25/84
24/23 - 10s - auc: 0.9678 - loss: 0.2966 - val_auc: 0.8884 - val_loss: 0.2586 - lr: 2.1030e-04
Epoch 26/84
24/23 - 10s - auc: 0.9743 - loss: 0.2745 - val_auc: 0.8807 - val_loss: 0.2125 - lr: 1.8937e-04
Epoch 27/84
24/23 - 10s - auc: 0.9804 - loss: 0.2541 - val_auc: 0.8941 - val_loss: 0.2192 - lr: 1.7053e-04
Epoch 28/84
24/23 - 12s - auc: 0.9830 - loss: 0.2458 - val_auc: 0.8911 - val_loss: 0.2066 - lr: 1.5358e-04
Epoch 29/84
24/23 - 10s - auc: 0.9816 - loss: 0.2501 - val_auc: 0.8721 - val_loss: 0.2111 - lr: 1.3832e-04
Epoch 30/84
24/23 - 12s - auc: 0.9843 - loss: 0.2387 - val_auc: 0.8771 - val_loss: 0.2002 - lr: 1.2459e-04
Epoch 31/84
24/23 - 14s - auc: 0.9802 - loss: 0.2542 - val_auc: 0.8787 - val_loss: 0.1920 - lr: 1.1223e-04
Epoch 32/84
24/23 - 10s - auc: 0.9787 - loss: 0.2609 - val_auc: 0.8837 - val_loss: 0.2089 - lr: 1.0111e-04
Epoch 33/84
24/23 - 10s - auc: 0.9825 - loss: 0.2490 - val_auc: 0.8697 - val_loss: 0.1941 - lr: 9.1095e-05
Epoch 34/84
24/23 - 10s - auc: 0.9818 - loss: 0.2488 - val_auc: 0.8587 - val_loss: 0.1989 - lr: 8.2086e-05
Epoch 35/84
24/23 - 11s - auc: 0.9810 - loss: 0.2527 - val_auc: 0.8788 - val_loss: 0.2084 - lr: 7.3977e-05
Epoch 36/84
24/23 - 10s - auc: 0.9841 - loss: 0.2405 - val_auc: 0.8883 - val_loss: 0.2074 - lr: 6.6679e-05
Epoch 37/84
24/23 - 10s - auc: 0.9891 - loss: 0.2212 - val_auc: 0.8763 - val_loss: 0.1995 - lr: 6.0111e-05
Epoch 38/84
24/23 - 10s - auc: 0.9889 - loss: 0.2190 - val_auc: 0.8689 - val_loss: 0.1960 - lr: 5.4200e-05
Epoch 39/84
24/23 - 10s - auc: 0.9881 - loss: 0.2212 - val_auc: 0.8642 - val_loss: 0.2003 - lr: 4.8880e-05
Epoch 40/84
24/23 - 10s - auc: 0.9892 - loss: 0.2179 - val_auc: 0.8881 - val_loss: 0.2045 - lr: 4.4092e-05
Epoch 41/84
24/23 - 10s - auc: 0.9874 - loss: 0.2290 - val_auc: 0.8753 - val_loss: 0.1953 - lr: 3.9783e-05
Epoch 42/84
24/23 - 10s - auc: 0.9845 - loss: 0.2392 - val_auc: 0.8735 - val_loss: 0.1982 - lr: 3.5905e-05
Epoch 43/84
24/23 - 10s - auc: 0.9874 - loss: 0.2288 - val_auc: 0.8614 - val_loss: 0.1957 - lr: 3.2414e-05
Epoch 44/84
24/23 - 10s - auc: 0.9868 - loss: 0.2315 - val_auc: 0.8563 - val_loss: 0.2001 - lr: 2.9273e-05
Epoch 45/84
24/23 - 10s - auc: 0.9863 - loss: 0.2309 - val_auc: 0.8474 - val_loss: 0.1996 - lr: 2.6445e-05
Epoch 46/84
24/23 - 10s - auc: 0.9888 - loss: 0.2199 - val_auc: 0.8651 - val_loss: 0.2026 - lr: 2.3901e-05
Epoch 47/84
24/23 - 10s - auc: 0.9892 - loss: 0.2166 - val_auc: 0.8754 - val_loss: 0.2071 - lr: 2.1611e-05
Epoch 48/84
24/23 - 10s - auc: 0.9904 - loss: 0.2134 - val_auc: 0.8740 - val_loss: 0.2031 - lr: 1.9550e-05
Epoch 49/84
24/23 - 10s - auc: 0.9897 - loss: 0.2167 - val_auc: 0.8606 - val_loss: 0.1954 - lr: 1.7695e-05
Epoch 50/84
24/23 - 10s - auc: 0.9923 - loss: 0.2029 - val_auc: 0.8699 - val_loss: 0.1970 - lr: 1.6025e-05
Epoch 51/84
24/23 - 10s - auc: 0.9885 - loss: 0.2201 - val_auc: 0.8725 - val_loss: 0.1964 - lr: 1.4523e-05
Epoch 52/84
24/23 - 10s - auc: 0.9880 - loss: 0.2224 - val_auc: 0.8660 - val_loss: 0.1963 - lr: 1.3171e-05
Epoch 53/84
24/23 - 10s - auc: 0.9870 - loss: 0.2276 - val_auc: 0.8581 - val_loss: 0.1977 - lr: 1.1953e-05
Epoch 54/84
24/23 - 10s - auc: 0.9867 - loss: 0.2287 - val_auc: 0.8560 - val_loss: 0.1967 - lr: 1.0858e-05
Epoch 55/84
24/23 - 10s - auc: 0.9889 - loss: 0.2190 - val_auc: 0.8600 - val_loss: 0.1961 - lr: 9.8723e-06
Epoch 56/84
24/23 - 10s - auc: 0.9895 - loss: 0.2159 - val_auc: 0.8643 - val_loss: 0.1972 - lr: 8.9851e-06
Epoch 57/84
24/23 - 10s - auc: 0.9920 - loss: 0.2029 - val_auc: 0.8637 - val_loss: 0.1979 - lr: 8.1866e-06
Epoch 58/84
24/23 - 10s - auc: 0.9911 - loss: 0.2101 - val_auc: 0.8699 - val_loss: 0.1991 - lr: 7.4679e-06
Epoch 59/84
24/23 - 10s - auc: 0.9907 - loss: 0.2119 - val_auc: 0.8702 - val_loss: 0.1988 - lr: 6.8211e-06
Epoch 60/84
24/23 - 10s - auc: 0.9902 - loss: 0.2120 - val_auc: 0.8658 - val_loss: 0.1979 - lr: 6.2390e-06
Epoch 61/84
24/23 - 10s - auc: 0.9880 - loss: 0.2253 - val_auc: 0.8688 - val_loss: 0.1964 - lr: 5.7151e-06
Epoch 62/84
24/23 - 10s - auc: 0.9880 - loss: 0.2228 - val_auc: 0.8672 - val_loss: 0.1953 - lr: 5.2436e-06
Epoch 63/84
24/23 - 10s - auc: 0.9903 - loss: 0.2137 - val_auc: 0.8638 - val_loss: 0.1949 - lr: 4.8192e-06
Epoch 64/84
24/23 - 10s - auc: 0.9858 - loss: 0.2327 - val_auc: 0.8665 - val_loss: 0.1959 - lr: 4.4373e-06
Epoch 65/84
24/23 - 10s - auc: 0.9865 - loss: 0.2275 - val_auc: 0.8654 - val_loss: 0.1965 - lr: 4.0936e-06
Epoch 66/84
24/23 - 10s - auc: 0.9915 - loss: 0.2080 - val_auc: 0.8674 - val_loss: 0.1976 - lr: 3.7842e-06
Epoch 67/84
24/23 - 10s - auc: 0.9930 - loss: 0.1986 - val_auc: 0.8689 - val_loss: 0.1985 - lr: 3.5058e-06
Epoch 68/84
24/23 - 10s - auc: 0.9929 - loss: 0.2013 - val_auc: 0.8685 - val_loss: 0.1991 - lr: 3.2552e-06
Epoch 69/84
24/23 - 10s - auc: 0.9924 - loss: 0.2025 - val_auc: 0.8728 - val_loss: 0.1993 - lr: 3.0297e-06
Epoch 70/84
24/23 - 10s - auc: 0.9900 - loss: 0.2125 - val_auc: 0.8710 - val_loss: 0.1993 - lr: 2.8267e-06
Epoch 71/84
24/23 - 10s - auc: 0.9886 - loss: 0.2217 - val_auc: 0.8730 - val_loss: 0.1982 - lr: 2.6441e-06
Epoch 72/84
24/23 - 10s - auc: 0.9886 - loss: 0.2196 - val_auc: 0.8686 - val_loss: 0.1969 - lr: 2.4796e-06
Epoch 73/84
24/23 - 10s - auc: 0.9875 - loss: 0.2237 - val_auc: 0.8659 - val_loss: 0.1964 - lr: 2.3317e-06
Epoch 74/84
24/23 - 10s - auc: 0.9879 - loss: 0.2240 - val_auc: 0.8605 - val_loss: 0.1956 - lr: 2.1985e-06
Epoch 75/84
24/23 - 10s - auc: 0.9892 - loss: 0.2179 - val_auc: 0.8615 - val_loss: 0.1952 - lr: 2.0787e-06
Epoch 76/84
24/23 - 10s - auc: 0.9916 - loss: 0.2056 - val_auc: 0.8591 - val_loss: 0.1954 - lr: 1.9708e-06
Epoch 77/84
24/23 - 10s - auc: 0.9927 - loss: 0.2017 - val_auc: 0.8589 - val_loss: 0.1962 - lr: 1.8737e-06
Epoch 78/84
24/23 - 10s - auc: 0.9905 - loss: 0.2087 - val_auc: 0.8618 - val_loss: 0.1971 - lr: 1.7863e-06
Epoch 79/84
24/23 - 10s - auc: 0.9930 - loss: 0.1995 - val_auc: 0.8631 - val_loss: 0.1975 - lr: 1.7077e-06
Epoch 80/84
24/23 - 10s - auc: 0.9901 - loss: 0.2122 - val_auc: 0.8646 - val_loss: 0.1977 - lr: 1.6369e-06
Epoch 81/84
24/23 - 10s - auc: 0.9886 - loss: 0.2185 - val_auc: 0.8670 - val_loss: 0.1972 - lr: 1.5732e-06
Epoch 82/84
24/23 - 10s - auc: 0.9894 - loss: 0.2166 - val_auc: 0.8655 - val_loss: 0.1970 - lr: 1.5159e-06
Epoch 83/84
24/23 - 10s - auc: 0.9878 - loss: 0.2233 - val_auc: 0.8644 - val_loss: 0.1973 - lr: 1.4643e-06
Epoch 84/84
24/23 - 10s - auc: 0.9879 - loss: 0.2243 - val_auc: 0.8631 - val_loss: 0.1969 - lr: 1.4179e-06
Predicting OOF with TTA (AUC)...
160/159 - 45s
Predicting Test with TTA (AUC)...
269/268 - 77s
Predicting OOF with TTA (Loss)...
160/159 - 46s
Predicting Test with TTA (Loss)...
269/268 - 78s
#### FOLD 2 OOF AUC = 0.899, with TTA (Loss) = 0.896, with TTA (AUC) = 0.897, with TTA (Blend) = 0.911
FOLD: 3
TRAIN: [ 0 3 4 5 6 7 8 9 10 11 12 13] VALID: [ 1 2 14]
Epoch 1/84
24/23 - 35s - auc: 0.5841 - loss: 0.6923 - val_auc: 0.6016 - val_loss: 0.5631 - lr: 5.0000e-06
Epoch 2/84
24/23 - 15s - auc: 0.7605 - loss: 0.6197 - val_auc: 0.6705 - val_loss: 0.4499 - lr: 2.6000e-05
Epoch 3/84
24/23 - 15s - auc: 0.8394 - loss: 0.5299 - val_auc: 0.6990 - val_loss: 0.3137 - lr: 4.7000e-05
Epoch 4/84
24/23 - 15s - auc: 0.8703 - loss: 0.4785 - val_auc: 0.7234 - val_loss: 0.2628 - lr: 6.8000e-05
Epoch 5/84
24/23 - 15s - auc: 0.8723 - loss: 0.4722 - val_auc: 0.7736 - val_loss: 0.2473 - lr: 8.9000e-05
Epoch 6/84
24/23 - 15s - auc: 0.8923 - loss: 0.4435 - val_auc: 0.7973 - val_loss: 0.2173 - lr: 1.1000e-04
Epoch 7/84
24/23 - 15s - auc: 0.9106 - loss: 0.4161 - val_auc: 0.8272 - val_loss: 0.2000 - lr: 1.3100e-04
Epoch 8/84
24/23 - 13s - auc: 0.9352 - loss: 0.3719 - val_auc: 0.8541 - val_loss: 0.2080 - lr: 1.5200e-04
Epoch 9/84
24/23 - 15s - auc: 0.9467 - loss: 0.3476 - val_auc: 0.8735 - val_loss: 0.1990 - lr: 1.7300e-04
Epoch 10/84
24/23 - 12s - auc: 0.9575 - loss: 0.3196 - val_auc: 0.8843 - val_loss: 0.2071 - lr: 1.9400e-04
Epoch 11/84
24/23 - 12s - auc: 0.9459 - loss: 0.3494 - val_auc: 0.8947 - val_loss: 0.2391 - lr: 2.1500e-04
Epoch 12/84
24/23 - 12s - auc: 0.9300 - loss: 0.3825 - val_auc: 0.9061 - val_loss: 0.2260 - lr: 2.3600e-04
Epoch 13/84
24/23 - 12s - auc: 0.9291 - loss: 0.3844 - val_auc: 0.9136 - val_loss: 0.2148 - lr: 2.5700e-04
Epoch 14/84
24/23 - 10s - auc: 0.9357 - loss: 0.3722 - val_auc: 0.9131 - val_loss: 0.2078 - lr: 2.7800e-04
Epoch 15/84
24/23 - 12s - auc: 0.9295 - loss: 0.3838 - val_auc: 0.9168 - val_loss: 0.2279 - lr: 2.9900e-04
Epoch 16/84
24/23 - 10s - auc: 0.9404 - loss: 0.3613 - val_auc: 0.9121 - val_loss: 0.2436 - lr: 3.2000e-04
Epoch 17/84
24/23 - 15s - auc: 0.9544 - loss: 0.3298 - val_auc: 0.9206 - val_loss: 0.1970 - lr: 3.2000e-04
Epoch 18/84
24/23 - 10s - auc: 0.9654 - loss: 0.3021 - val_auc: 0.9113 - val_loss: 0.1973 - lr: 3.2000e-04
Epoch 19/84
24/23 - 10s - auc: 0.9706 - loss: 0.2866 - val_auc: 0.9115 - val_loss: 0.2067 - lr: 3.2000e-04
Epoch 20/84
24/23 - 10s - auc: 0.9758 - loss: 0.2696 - val_auc: 0.9066 - val_loss: 0.2124 - lr: 3.2000e-04
Epoch 21/84
24/23 - 10s - auc: 0.9662 - loss: 0.2996 - val_auc: 0.9181 - val_loss: 0.2107 - lr: 3.2000e-04
Epoch 22/84
24/23 - 10s - auc: 0.9656 - loss: 0.3024 - val_auc: 0.9177 - val_loss: 0.2356 - lr: 2.8810e-04
Epoch 23/84
24/23 - 10s - auc: 0.9613 - loss: 0.3125 - val_auc: 0.9062 - val_loss: 0.2361 - lr: 2.5939e-04
Epoch 24/84
24/23 - 10s - auc: 0.9594 - loss: 0.3172 - val_auc: 0.9014 - val_loss: 0.2263 - lr: 2.3355e-04
Epoch 25/84
24/23 - 10s - auc: 0.9640 - loss: 0.3065 - val_auc: 0.9163 - val_loss: 0.2098 - lr: 2.1030e-04
Epoch 26/84
24/23 - 12s - auc: 0.9729 - loss: 0.2804 - val_auc: 0.9096 - val_loss: 0.1929 - lr: 1.8937e-04
Epoch 27/84
24/23 - 10s - auc: 0.9736 - loss: 0.2776 - val_auc: 0.9173 - val_loss: 0.2096 - lr: 1.7053e-04
Epoch 28/84
24/23 - 10s - auc: 0.9823 - loss: 0.2449 - val_auc: 0.9156 - val_loss: 0.2053 - lr: 1.5358e-04
Epoch 29/84
24/23 - 10s - auc: 0.9842 - loss: 0.2408 - val_auc: 0.8977 - val_loss: 0.1959 - lr: 1.3832e-04
Epoch 30/84
24/23 - 12s - auc: 0.9847 - loss: 0.2367 - val_auc: 0.8829 - val_loss: 0.1916 - lr: 1.2459e-04
Epoch 31/84
24/23 - 10s - auc: 0.9820 - loss: 0.2496 - val_auc: 0.9069 - val_loss: 0.2097 - lr: 1.1223e-04
Epoch 32/84
24/23 - 10s - auc: 0.9780 - loss: 0.2624 - val_auc: 0.8973 - val_loss: 0.1992 - lr: 1.0111e-04
Epoch 33/84
24/23 - 10s - auc: 0.9788 - loss: 0.2618 - val_auc: 0.9092 - val_loss: 0.2099 - lr: 9.1095e-05
Epoch 34/84
24/23 - 10s - auc: 0.9812 - loss: 0.2532 - val_auc: 0.8980 - val_loss: 0.2044 - lr: 8.2086e-05
Epoch 35/84
24/23 - 10s - auc: 0.9770 - loss: 0.2652 - val_auc: 0.9128 - val_loss: 0.2135 - lr: 7.3977e-05
Epoch 36/84
24/23 - 10s - auc: 0.9827 - loss: 0.2463 - val_auc: 0.9138 - val_loss: 0.1930 - lr: 6.6679e-05
Epoch 37/84
24/23 - 12s - auc: 0.9844 - loss: 0.2390 - val_auc: 0.9120 - val_loss: 0.1913 - lr: 6.0111e-05
Epoch 38/84
24/23 - 12s - auc: 0.9880 - loss: 0.2238 - val_auc: 0.8926 - val_loss: 0.1860 - lr: 5.4200e-05
Epoch 39/84
24/23 - 10s - auc: 0.9916 - loss: 0.2086 - val_auc: 0.8868 - val_loss: 0.1922 - lr: 4.8880e-05
Epoch 40/84
24/23 - 10s - auc: 0.9903 - loss: 0.2119 - val_auc: 0.8790 - val_loss: 0.1876 - lr: 4.4092e-05
Epoch 41/84
24/23 - 10s - auc: 0.9858 - loss: 0.2332 - val_auc: 0.8663 - val_loss: 0.1872 - lr: 3.9783e-05
Epoch 42/84
24/23 - 10s - auc: 0.9843 - loss: 0.2387 - val_auc: 0.8545 - val_loss: 0.1890 - lr: 3.5905e-05
Epoch 43/84
24/23 - 10s - auc: 0.9834 - loss: 0.2436 - val_auc: 0.8662 - val_loss: 0.1926 - lr: 3.2414e-05
Epoch 44/84
24/23 - 10s - auc: 0.9844 - loss: 0.2389 - val_auc: 0.8691 - val_loss: 0.1957 - lr: 2.9273e-05
Epoch 45/84
24/23 - 10s - auc: 0.9812 - loss: 0.2509 - val_auc: 0.8728 - val_loss: 0.1957 - lr: 2.6445e-05
Epoch 46/84
24/23 - 13s - auc: 0.9870 - loss: 0.2296 - val_auc: 0.8793 - val_loss: 0.1942 - lr: 2.3901e-05
Epoch 47/84
24/23 - 10s - auc: 0.9888 - loss: 0.2215 - val_auc: 0.8808 - val_loss: 0.1906 - lr: 2.1611e-05
Epoch 48/84
24/23 - 10s - auc: 0.9924 - loss: 0.2027 - val_auc: 0.8728 - val_loss: 0.1875 - lr: 1.9550e-05
Epoch 49/84
24/23 - 10s - auc: 0.9922 - loss: 0.2029 - val_auc: 0.8730 - val_loss: 0.1895 - lr: 1.7695e-05
Epoch 50/84
24/23 - 10s - auc: 0.9906 - loss: 0.2103 - val_auc: 0.8627 - val_loss: 0.1865 - lr: 1.6025e-05
Epoch 51/84
24/23 - 10s - auc: 0.9882 - loss: 0.2232 - val_auc: 0.8578 - val_loss: 0.1866 - lr: 1.4523e-05
Epoch 52/84
24/23 - 10s - auc: 0.9873 - loss: 0.2258 - val_auc: 0.8662 - val_loss: 0.1894 - lr: 1.3171e-05
Epoch 53/84
24/23 - 10s - auc: 0.9844 - loss: 0.2401 - val_auc: 0.8622 - val_loss: 0.1894 - lr: 1.1953e-05
Epoch 54/84
24/23 - 10s - auc: 0.9865 - loss: 0.2308 - val_auc: 0.8588 - val_loss: 0.1912 - lr: 1.0858e-05
Epoch 55/84
24/23 - 10s - auc: 0.9845 - loss: 0.2395 - val_auc: 0.8639 - val_loss: 0.1913 - lr: 9.8723e-06
Epoch 56/84
24/23 - 10s - auc: 0.9893 - loss: 0.2180 - val_auc: 0.8619 - val_loss: 0.1903 - lr: 8.9851e-06
Epoch 57/84
24/23 - 10s - auc: 0.9889 - loss: 0.2188 - val_auc: 0.8743 - val_loss: 0.1925 - lr: 8.1866e-06
Epoch 58/84
24/23 - 10s - auc: 0.9930 - loss: 0.1988 - val_auc: 0.8787 - val_loss: 0.1928 - lr: 7.4679e-06
Epoch 59/84
24/23 - 10s - auc: 0.9934 - loss: 0.1968 - val_auc: 0.8766 - val_loss: 0.1926 - lr: 6.8211e-06
Epoch 60/84
24/23 - 10s - auc: 0.9918 - loss: 0.2033 - val_auc: 0.8745 - val_loss: 0.1915 - lr: 6.2390e-06
Epoch 61/84
24/23 - 10s - auc: 0.9892 - loss: 0.2202 - val_auc: 0.8644 - val_loss: 0.1887 - lr: 5.7151e-06
Epoch 62/84
24/23 - 10s - auc: 0.9862 - loss: 0.2312 - val_auc: 0.8579 - val_loss: 0.1861 - lr: 5.2436e-06
Epoch 63/84
24/23 - 10s - auc: 0.9875 - loss: 0.2263 - val_auc: 0.8554 - val_loss: 0.1867 - lr: 4.8192e-06
Epoch 64/84
24/23 - 10s - auc: 0.9849 - loss: 0.2360 - val_auc: 0.8600 - val_loss: 0.1880 - lr: 4.4373e-06
Epoch 65/84
24/23 - 10s - auc: 0.9868 - loss: 0.2312 - val_auc: 0.8588 - val_loss: 0.1882 - lr: 4.0936e-06
Epoch 66/84
24/23 - 10s - auc: 0.9880 - loss: 0.2229 - val_auc: 0.8602 - val_loss: 0.1888 - lr: 3.7842e-06
Epoch 67/84
24/23 - 10s - auc: 0.9920 - loss: 0.2073 - val_auc: 0.8676 - val_loss: 0.1894 - lr: 3.5058e-06
Epoch 68/84
24/23 - 10s - auc: 0.9929 - loss: 0.2013 - val_auc: 0.8736 - val_loss: 0.1905 - lr: 3.2552e-06
Epoch 69/84
24/23 - 10s - auc: 0.9948 - loss: 0.1907 - val_auc: 0.8746 - val_loss: 0.1913 - lr: 3.0297e-06
Epoch 70/84
24/23 - 10s - auc: 0.9903 - loss: 0.2117 - val_auc: 0.8746 - val_loss: 0.1918 - lr: 2.8267e-06
Epoch 71/84
24/23 - 10s - auc: 0.9893 - loss: 0.2201 - val_auc: 0.8717 - val_loss: 0.1901 - lr: 2.6441e-06
Epoch 72/84
24/23 - 10s - auc: 0.9868 - loss: 0.2296 - val_auc: 0.8651 - val_loss: 0.1896 - lr: 2.4796e-06
Epoch 73/84
24/23 - 10s - auc: 0.9885 - loss: 0.2231 - val_auc: 0.8615 - val_loss: 0.1890 - lr: 2.3317e-06
Epoch 74/84
24/23 - 10s - auc: 0.9849 - loss: 0.2355 - val_auc: 0.8558 - val_loss: 0.1881 - lr: 2.1985e-06
Epoch 75/84
24/23 - 10s - auc: 0.9833 - loss: 0.2406 - val_auc: 0.8565 - val_loss: 0.1879 - lr: 2.0787e-06
Epoch 76/84
24/23 - 10s - auc: 0.9904 - loss: 0.2148 - val_auc: 0.8601 - val_loss: 0.1883 - lr: 1.9708e-06
Epoch 77/84
24/23 - 10s - auc: 0.9904 - loss: 0.2136 - val_auc: 0.8635 - val_loss: 0.1887 - lr: 1.8737e-06
Epoch 78/84
24/23 - 10s - auc: 0.9921 - loss: 0.2034 - val_auc: 0.8671 - val_loss: 0.1898 - lr: 1.7863e-06
Epoch 79/84
24/23 - 10s - auc: 0.9938 - loss: 0.1967 - val_auc: 0.8641 - val_loss: 0.1901 - lr: 1.7077e-06
Epoch 80/84
24/23 - 10s - auc: 0.9890 - loss: 0.2192 - val_auc: 0.8668 - val_loss: 0.1905 - lr: 1.6369e-06
Epoch 81/84
24/23 - 10s - auc: 0.9893 - loss: 0.2194 - val_auc: 0.8688 - val_loss: 0.1898 - lr: 1.5732e-06
Epoch 82/84
24/23 - 10s - auc: 0.9881 - loss: 0.2251 - val_auc: 0.8643 - val_loss: 0.1897 - lr: 1.5159e-06
Epoch 83/84
24/23 - 10s - auc: 0.9883 - loss: 0.2219 - val_auc: 0.8626 - val_loss: 0.1892 - lr: 1.4643e-06
Epoch 84/84
24/23 - 10s - auc: 0.9855 - loss: 0.2339 - val_auc: 0.8613 - val_loss: 0.1887 - lr: 1.4179e-06
Predicting OOF with TTA (AUC)...
160/159 - 45s
Predicting Test with TTA (AUC)...
269/268 - 77s
Predicting OOF with TTA (Loss)...
160/159 - 46s
Predicting Test with TTA (Loss)...
269/268 - 78s
#### FOLD 3 OOF AUC = 0.921, with TTA (Loss) = 0.933, with TTA (AUC) = 0.923, with TTA (Blend) = 0.935
FOLD: 4
TRAIN: [ 0 1 2 3 5 6 8 9 11 12 13 14] VALID: [ 4 7 10]
Epoch 1/84
24/23 - 36s - auc: 0.5819 - loss: 0.6943 - val_auc: 0.5734 - val_loss: 0.7070 - lr: 5.0000e-06
Epoch 2/84
24/23 - 12s - auc: 0.7352 - loss: 0.6372 - val_auc: 0.5634 - val_loss: 0.5538 - lr: 2.6000e-05
Epoch 3/84
24/23 - 15s - auc: 0.8369 - loss: 0.5387 - val_auc: 0.6021 - val_loss: 0.4131 - lr: 4.7000e-05
Epoch 4/84
24/23 - 15s - auc: 0.8734 - loss: 0.4787 - val_auc: 0.6524 - val_loss: 0.3372 - lr: 6.8000e-05
Epoch 5/84
24/23 - 15s - auc: 0.8942 - loss: 0.4417 - val_auc: 0.6975 - val_loss: 0.2576 - lr: 8.9000e-05
Epoch 6/84
24/23 - 15s - auc: 0.9214 - loss: 0.3971 - val_auc: 0.7810 - val_loss: 0.2507 - lr: 1.1000e-04
Epoch 7/84
24/23 - 15s - auc: 0.9334 - loss: 0.3738 - val_auc: 0.8173 - val_loss: 0.2166 - lr: 1.3100e-04
Epoch 8/84
24/23 - 15s - auc: 0.9414 - loss: 0.3586 - val_auc: 0.8449 - val_loss: 0.2147 - lr: 1.5200e-04
Epoch 9/84
24/23 - 12s - auc: 0.9193 - loss: 0.4018 - val_auc: 0.8578 - val_loss: 0.2191 - lr: 1.7300e-04
Epoch 10/84
24/23 - 12s - auc: 0.9168 - loss: 0.4058 - val_auc: 0.8828 - val_loss: 0.2223 - lr: 1.9400e-04
Epoch 11/84
24/23 - 12s - auc: 0.9227 - loss: 0.3968 - val_auc: 0.8942 - val_loss: 0.2666 - lr: 2.1500e-04
Epoch 12/84
24/23 - 12s - auc: 0.9291 - loss: 0.3846 - val_auc: 0.9001 - val_loss: 0.2223 - lr: 2.3600e-04
Epoch 13/84
24/23 - 10s - auc: 0.9325 - loss: 0.3775 - val_auc: 0.8857 - val_loss: 0.2227 - lr: 2.5700e-04
Epoch 14/84
24/23 - 10s - auc: 0.9389 - loss: 0.3648 - val_auc: 0.8866 - val_loss: 0.2404 - lr: 2.7800e-04
Epoch 15/84
24/23 - 12s - auc: 0.9485 - loss: 0.3441 - val_auc: 0.8999 - val_loss: 0.2001 - lr: 2.9900e-04
Epoch 16/84
24/23 - 10s - auc: 0.9583 - loss: 0.3191 - val_auc: 0.8818 - val_loss: 0.2339 - lr: 3.2000e-04
Epoch 17/84
24/23 - 12s - auc: 0.9662 - loss: 0.2992 - val_auc: 0.8956 - val_loss: 0.1876 - lr: 3.2000e-04
Epoch 18/84
24/23 - 13s - auc: 0.9712 - loss: 0.2844 - val_auc: 0.9047 - val_loss: 0.2144 - lr: 3.2000e-04
Epoch 19/84
24/23 - 10s - auc: 0.9495 - loss: 0.3403 - val_auc: 0.8984 - val_loss: 0.2288 - lr: 3.2000e-04
Epoch 20/84
24/23 - 10s - auc: 0.9526 - loss: 0.3359 - val_auc: 0.8941 - val_loss: 0.2860 - lr: 3.2000e-04
Epoch 21/84
24/23 - 10s - auc: 0.9554 - loss: 0.3288 - val_auc: 0.9037 - val_loss: 0.2513 - lr: 3.2000e-04
Epoch 22/84
24/23 - 10s - auc: 0.9642 - loss: 0.3075 - val_auc: 0.8933 - val_loss: 0.2275 - lr: 2.8810e-04
Epoch 23/84
24/23 - 10s - auc: 0.9634 - loss: 0.3070 - val_auc: 0.8999 - val_loss: 0.2349 - lr: 2.5939e-04
Epoch 24/84
24/23 - 10s - auc: 0.9681 - loss: 0.2955 - val_auc: 0.9036 - val_loss: 0.2266 - lr: 2.3355e-04
Epoch 25/84
24/23 - 10s - auc: 0.9734 - loss: 0.2811 - val_auc: 0.9031 - val_loss: 0.1893 - lr: 2.1030e-04
Epoch 26/84
24/23 - 10s - auc: 0.9770 - loss: 0.2668 - val_auc: 0.8937 - val_loss: 0.1885 - lr: 1.8937e-04
Epoch 27/84
24/23 - 10s - auc: 0.9838 - loss: 0.2425 - val_auc: 0.8874 - val_loss: 0.1890 - lr: 1.7053e-04
Epoch 28/84
24/23 - 10s - auc: 0.9835 - loss: 0.2429 - val_auc: 0.8888 - val_loss: 0.1877 - lr: 1.5358e-04
Epoch 29/84
24/23 - 10s - auc: 0.9772 - loss: 0.2666 - val_auc: 0.8897 - val_loss: 0.2072 - lr: 1.3832e-04
Epoch 30/84
24/23 - 10s - auc: 0.9765 - loss: 0.2688 - val_auc: 0.8842 - val_loss: 0.2040 - lr: 1.2459e-04
Epoch 31/84
24/23 - 10s - auc: 0.9764 - loss: 0.2677 - val_auc: 0.8690 - val_loss: 0.1994 - lr: 1.1223e-04
Epoch 32/84
24/23 - 10s - auc: 0.9800 - loss: 0.2571 - val_auc: 0.8697 - val_loss: 0.1998 - lr: 1.0111e-04
Epoch 33/84
24/23 - 10s - auc: 0.9797 - loss: 0.2572 - val_auc: 0.8998 - val_loss: 0.2023 - lr: 9.1095e-05
Epoch 34/84
24/23 - 10s - auc: 0.9826 - loss: 0.2478 - val_auc: 0.8874 - val_loss: 0.1956 - lr: 8.2086e-05
Epoch 35/84
24/23 - 10s - auc: 0.9834 - loss: 0.2412 - val_auc: 0.8868 - val_loss: 0.1910 - lr: 7.3977e-05
Epoch 36/84
24/23 - 10s - auc: 0.9880 - loss: 0.2245 - val_auc: 0.8874 - val_loss: 0.1928 - lr: 6.6679e-05
Epoch 37/84
24/23 - 10s - auc: 0.9911 - loss: 0.2093 - val_auc: 0.8798 - val_loss: 0.1957 - lr: 6.0111e-05
Epoch 38/84
24/23 - 10s - auc: 0.9886 - loss: 0.2211 - val_auc: 0.8846 - val_loss: 0.1893 - lr: 5.4200e-05
Epoch 39/84
24/23 - 10s - auc: 0.9830 - loss: 0.2435 - val_auc: 0.8802 - val_loss: 0.1986 - lr: 4.8880e-05
Epoch 40/84
24/23 - 10s - auc: 0.9835 - loss: 0.2438 - val_auc: 0.8842 - val_loss: 0.1943 - lr: 4.4092e-05
Epoch 41/84
24/23 - 10s - auc: 0.9843 - loss: 0.2419 - val_auc: 0.8864 - val_loss: 0.1964 - lr: 3.9783e-05
Epoch 42/84
24/23 - 10s - auc: 0.9853 - loss: 0.2350 - val_auc: 0.8826 - val_loss: 0.1941 - lr: 3.5905e-05
Epoch 43/84
24/23 - 10s - auc: 0.9876 - loss: 0.2272 - val_auc: 0.8848 - val_loss: 0.1994 - lr: 3.2414e-05
Epoch 44/84
24/23 - 14s - auc: 0.9857 - loss: 0.2336 - val_auc: 0.8849 - val_loss: 0.1979 - lr: 2.9273e-05
Epoch 45/84
24/23 - 10s - auc: 0.9866 - loss: 0.2297 - val_auc: 0.8911 - val_loss: 0.1964 - lr: 2.6445e-05
Epoch 46/84
24/23 - 10s - auc: 0.9916 - loss: 0.2076 - val_auc: 0.8941 - val_loss: 0.1979 - lr: 2.3901e-05
Epoch 47/84
24/23 - 10s - auc: 0.9932 - loss: 0.1981 - val_auc: 0.8915 - val_loss: 0.1948 - lr: 2.1611e-05
Epoch 48/84
24/23 - 10s - auc: 0.9911 - loss: 0.2086 - val_auc: 0.8854 - val_loss: 0.1958 - lr: 1.9550e-05
Epoch 49/84
24/23 - 10s - auc: 0.9856 - loss: 0.2342 - val_auc: 0.8698 - val_loss: 0.1899 - lr: 1.7695e-05
Epoch 50/84
24/23 - 10s - auc: 0.9838 - loss: 0.2415 - val_auc: 0.8730 - val_loss: 0.1879 - lr: 1.6025e-05
Epoch 51/84
24/23 - 10s - auc: 0.9881 - loss: 0.2237 - val_auc: 0.8777 - val_loss: 0.1909 - lr: 1.4523e-05
Epoch 52/84
24/23 - 10s - auc: 0.9861 - loss: 0.2324 - val_auc: 0.8807 - val_loss: 0.1918 - lr: 1.3171e-05
Epoch 53/84
24/23 - 10s - auc: 0.9887 - loss: 0.2217 - val_auc: 0.8821 - val_loss: 0.1929 - lr: 1.1953e-05
Epoch 54/84
24/23 - 10s - auc: 0.9904 - loss: 0.2153 - val_auc: 0.8784 - val_loss: 0.1935 - lr: 1.0858e-05
Epoch 55/84
24/23 - 10s - auc: 0.9900 - loss: 0.2143 - val_auc: 0.8788 - val_loss: 0.1959 - lr: 9.8723e-06
Epoch 56/84
24/23 - 10s - auc: 0.9926 - loss: 0.1999 - val_auc: 0.8817 - val_loss: 0.1967 - lr: 8.9851e-06
Epoch 57/84
24/23 - 10s - auc: 0.9931 - loss: 0.1985 - val_auc: 0.8807 - val_loss: 0.1950 - lr: 8.1866e-06
Epoch 58/84
24/23 - 10s - auc: 0.9903 - loss: 0.2130 - val_auc: 0.8752 - val_loss: 0.1928 - lr: 7.4679e-06
Epoch 59/84
24/23 - 10s - auc: 0.9862 - loss: 0.2312 - val_auc: 0.8706 - val_loss: 0.1903 - lr: 6.8211e-06
Epoch 60/84
24/23 - 10s - auc: 0.9871 - loss: 0.2276 - val_auc: 0.8735 - val_loss: 0.1894 - lr: 6.2390e-06
Epoch 61/84
24/23 - 10s - auc: 0.9869 - loss: 0.2301 - val_auc: 0.8749 - val_loss: 0.1897 - lr: 5.7151e-06
Epoch 62/84
24/23 - 10s - auc: 0.9879 - loss: 0.2246 - val_auc: 0.8746 - val_loss: 0.1901 - lr: 5.2436e-06
Epoch 63/84
24/23 - 10s - auc: 0.9888 - loss: 0.2201 - val_auc: 0.8775 - val_loss: 0.1916 - lr: 4.8192e-06
Epoch 64/84
24/23 - 10s - auc: 0.9905 - loss: 0.2130 - val_auc: 0.8807 - val_loss: 0.1931 - lr: 4.4373e-06
Epoch 65/84
24/23 - 10s - auc: 0.9904 - loss: 0.2142 - val_auc: 0.8814 - val_loss: 0.1941 - lr: 4.0936e-06
Epoch 66/84
24/23 - 10s - auc: 0.9933 - loss: 0.2001 - val_auc: 0.8831 - val_loss: 0.1956 - lr: 3.7842e-06
Epoch 67/84
24/23 - 10s - auc: 0.9938 - loss: 0.1966 - val_auc: 0.8798 - val_loss: 0.1962 - lr: 3.5058e-06
Epoch 68/84
24/23 - 10s - auc: 0.9896 - loss: 0.2151 - val_auc: 0.8786 - val_loss: 0.1952 - lr: 3.2552e-06
Epoch 69/84
24/23 - 10s - auc: 0.9859 - loss: 0.2307 - val_auc: 0.8796 - val_loss: 0.1934 - lr: 3.0297e-06
Epoch 70/84
24/23 - 10s - auc: 0.9857 - loss: 0.2335 - val_auc: 0.8742 - val_loss: 0.1921 - lr: 2.8267e-06
Epoch 71/84
24/23 - 10s - auc: 0.9878 - loss: 0.2245 - val_auc: 0.8746 - val_loss: 0.1914 - lr: 2.6441e-06
Epoch 72/84
24/23 - 10s - auc: 0.9886 - loss: 0.2213 - val_auc: 0.8772 - val_loss: 0.1922 - lr: 2.4796e-06
Epoch 73/84
24/23 - 10s - auc: 0.9883 - loss: 0.2207 - val_auc: 0.8787 - val_loss: 0.1915 - lr: 2.3317e-06
Epoch 74/84
24/23 - 10s - auc: 0.9902 - loss: 0.2136 - val_auc: 0.8787 - val_loss: 0.1916 - lr: 2.1985e-06
Epoch 75/84
24/23 - 10s - auc: 0.9919 - loss: 0.2056 - val_auc: 0.8782 - val_loss: 0.1924 - lr: 2.0787e-06
Epoch 76/84
24/23 - 10s - auc: 0.9938 - loss: 0.1987 - val_auc: 0.8799 - val_loss: 0.1931 - lr: 1.9708e-06
Epoch 77/84
24/23 - 10s - auc: 0.9940 - loss: 0.1943 - val_auc: 0.8773 - val_loss: 0.1944 - lr: 1.8737e-06
Epoch 78/84
24/23 - 10s - auc: 0.9888 - loss: 0.2190 - val_auc: 0.8776 - val_loss: 0.1944 - lr: 1.7863e-06
Epoch 79/84
24/23 - 10s - auc: 0.9874 - loss: 0.2264 - val_auc: 0.8813 - val_loss: 0.1935 - lr: 1.7077e-06
Epoch 80/84
24/23 - 10s - auc: 0.9876 - loss: 0.2269 - val_auc: 0.8801 - val_loss: 0.1928 - lr: 1.6369e-06
Epoch 81/84
24/23 - 10s - auc: 0.9886 - loss: 0.2195 - val_auc: 0.8784 - val_loss: 0.1919 - lr: 1.5732e-06
Epoch 82/84
24/23 - 10s - auc: 0.9900 - loss: 0.2161 - val_auc: 0.8788 - val_loss: 0.1918 - lr: 1.5159e-06
Epoch 83/84
24/23 - 10s - auc: 0.9889 - loss: 0.2192 - val_auc: 0.8770 - val_loss: 0.1915 - lr: 1.4643e-06
Epoch 84/84
24/23 - 10s - auc: 0.9884 - loss: 0.2214 - val_auc: 0.8776 - val_loss: 0.1917 - lr: 1.4179e-06
Predicting OOF with TTA (AUC)...
160/159 - 45s
Predicting Test with TTA (AUC)...
269/268 - 77s
Predicting OOF with TTA (Loss)...
160/159 - 46s
Predicting Test with TTA (Loss)...
269/268 - 78s
#### FOLD 4 OOF AUC = 0.905, with TTA (Loss) = 0.919, with TTA (AUC) = 0.919, with TTA (Blend) = 0.922
FOLD: 5
TRAIN: [ 0 1 2 4 5 7 8 9 10 11 13 14] VALID: [ 3 6 12]
Epoch 1/84
24/23 - 35s - auc: 0.5417 - loss: 0.6919 - val_auc: 0.4898 - val_loss: 0.5729 - lr: 5.0000e-06
Epoch 2/84
24/23 - 15s - auc: 0.7391 - loss: 0.6294 - val_auc: 0.5997 - val_loss: 0.4580 - lr: 2.6000e-05
Epoch 3/84
24/23 - 15s - auc: 0.8397 - loss: 0.5285 - val_auc: 0.6623 - val_loss: 0.3112 - lr: 4.7000e-05
Epoch 4/84
24/23 - 15s - auc: 0.8651 - loss: 0.4828 - val_auc: 0.7154 - val_loss: 0.2677 - lr: 6.8000e-05
Epoch 5/84
24/23 - 15s - auc: 0.8684 - loss: 0.4748 - val_auc: 0.7563 - val_loss: 0.2641 - lr: 8.9000e-05
Epoch 6/84
24/23 - 15s - auc: 0.9104 - loss: 0.4188 - val_auc: 0.7922 - val_loss: 0.2308 - lr: 1.1000e-04
Epoch 7/84
24/23 - 15s - auc: 0.9267 - loss: 0.3884 - val_auc: 0.8372 - val_loss: 0.2283 - lr: 1.3100e-04
Epoch 8/84
24/23 - 15s - auc: 0.9403 - loss: 0.3615 - val_auc: 0.8680 - val_loss: 0.1959 - lr: 1.5200e-04
Epoch 9/84
24/23 - 12s - auc: 0.9382 - loss: 0.3657 - val_auc: 0.8769 - val_loss: 0.2294 - lr: 1.7300e-04
Epoch 10/84
24/23 - 13s - auc: 0.9483 - loss: 0.3444 - val_auc: 0.8849 - val_loss: 0.2163 - lr: 1.9400e-04
Epoch 11/84
24/23 - 10s - auc: 0.9321 - loss: 0.3789 - val_auc: 0.8832 - val_loss: 0.2059 - lr: 2.1500e-04
Epoch 12/84
24/23 - 10s - auc: 0.9294 - loss: 0.3842 - val_auc: 0.8837 - val_loss: 0.2692 - lr: 2.3600e-04
Epoch 13/84
24/23 - 12s - auc: 0.9393 - loss: 0.3660 - val_auc: 0.8904 - val_loss: 0.2532 - lr: 2.5700e-04
Epoch 14/84
24/23 - 10s - auc: 0.9395 - loss: 0.3632 - val_auc: 0.8879 - val_loss: 0.2476 - lr: 2.7800e-04
Epoch 15/84
24/23 - 12s - auc: 0.9315 - loss: 0.3795 - val_auc: 0.8942 - val_loss: 0.2503 - lr: 2.9900e-04
Epoch 16/84
24/23 - 12s - auc: 0.9514 - loss: 0.3378 - val_auc: 0.8995 - val_loss: 0.2455 - lr: 3.2000e-04
Epoch 17/84
24/23 - 10s - auc: 0.9644 - loss: 0.3061 - val_auc: 0.8861 - val_loss: 0.2075 - lr: 3.2000e-04
Epoch 18/84
24/23 - 10s - auc: 0.9662 - loss: 0.2996 - val_auc: 0.8777 - val_loss: 0.1959 - lr: 3.2000e-04
Epoch 19/84
24/23 - 10s - auc: 0.9683 - loss: 0.2941 - val_auc: 0.8834 - val_loss: 0.2053 - lr: 3.2000e-04
Epoch 20/84
24/23 - 12s - auc: 0.9704 - loss: 0.2870 - val_auc: 0.8903 - val_loss: 0.1901 - lr: 3.2000e-04
Epoch 21/84
24/23 - 10s - auc: 0.9630 - loss: 0.3092 - val_auc: 0.8784 - val_loss: 0.2136 - lr: 3.2000e-04
Epoch 22/84
24/23 - 10s - auc: 0.9596 - loss: 0.3174 - val_auc: 0.8900 - val_loss: 0.2218 - lr: 2.8810e-04
Epoch 23/84
24/23 - 10s - auc: 0.9643 - loss: 0.3061 - val_auc: 0.8792 - val_loss: 0.2174 - lr: 2.5939e-04
Epoch 24/84
24/23 - 10s - auc: 0.9660 - loss: 0.3007 - val_auc: 0.8821 - val_loss: 0.2429 - lr: 2.3355e-04
Epoch 25/84
24/23 - 10s - auc: 0.9661 - loss: 0.2996 - val_auc: 0.8918 - val_loss: 0.2289 - lr: 2.1030e-04
Epoch 26/84
24/23 - 10s - auc: 0.9737 - loss: 0.2775 - val_auc: 0.8863 - val_loss: 0.2098 - lr: 1.8937e-04
Epoch 27/84
24/23 - 10s - auc: 0.9803 - loss: 0.2554 - val_auc: 0.8919 - val_loss: 0.1955 - lr: 1.7053e-04
Epoch 28/84
24/23 - 10s - auc: 0.9822 - loss: 0.2472 - val_auc: 0.8950 - val_loss: 0.1999 - lr: 1.5358e-04
Epoch 29/84
24/23 - 12s - auc: 0.9840 - loss: 0.2419 - val_auc: 0.9030 - val_loss: 0.1959 - lr: 1.3832e-04
Epoch 30/84
24/23 - 10s - auc: 0.9848 - loss: 0.2370 - val_auc: 0.8826 - val_loss: 0.1910 - lr: 1.2459e-04
Epoch 31/84
24/23 - 10s - auc: 0.9797 - loss: 0.2573 - val_auc: 0.9008 - val_loss: 0.2007 - lr: 1.1223e-04
Epoch 32/84
24/23 - 10s - auc: 0.9791 - loss: 0.2586 - val_auc: 0.8906 - val_loss: 0.1956 - lr: 1.0111e-04
Epoch 33/84
24/23 - 12s - auc: 0.9807 - loss: 0.2526 - val_auc: 0.8754 - val_loss: 0.1871 - lr: 9.1095e-05
Epoch 34/84
24/23 - 10s - auc: 0.9791 - loss: 0.2592 - val_auc: 0.8807 - val_loss: 0.2049 - lr: 8.2086e-05
Epoch 35/84
24/23 - 10s - auc: 0.9800 - loss: 0.2548 - val_auc: 0.8668 - val_loss: 0.1940 - lr: 7.3977e-05
Epoch 36/84
24/23 - 10s - auc: 0.9843 - loss: 0.2384 - val_auc: 0.8857 - val_loss: 0.2007 - lr: 6.6679e-05
Epoch 37/84
24/23 - 10s - auc: 0.9884 - loss: 0.2226 - val_auc: 0.8883 - val_loss: 0.1941 - lr: 6.0111e-05
Epoch 38/84
24/23 - 10s - auc: 0.9901 - loss: 0.2134 - val_auc: 0.8825 - val_loss: 0.1879 - lr: 5.4200e-05
Epoch 39/84
24/23 - 10s - auc: 0.9900 - loss: 0.2147 - val_auc: 0.8732 - val_loss: 0.1924 - lr: 4.8880e-05
Epoch 40/84
24/23 - 12s - auc: 0.9869 - loss: 0.2266 - val_auc: 0.8650 - val_loss: 0.1862 - lr: 4.4092e-05
Epoch 41/84
24/23 - 10s - auc: 0.9842 - loss: 0.2401 - val_auc: 0.8707 - val_loss: 0.1884 - lr: 3.9783e-05
Epoch 42/84
24/23 - 10s - auc: 0.9860 - loss: 0.2322 - val_auc: 0.8608 - val_loss: 0.1889 - lr: 3.5905e-05
Epoch 43/84
24/23 - 10s - auc: 0.9851 - loss: 0.2368 - val_auc: 0.8591 - val_loss: 0.1919 - lr: 3.2414e-05
Epoch 44/84
24/23 - 10s - auc: 0.9823 - loss: 0.2484 - val_auc: 0.8636 - val_loss: 0.1928 - lr: 2.9273e-05
Epoch 45/84
24/23 - 10s - auc: 0.9833 - loss: 0.2427 - val_auc: 0.8671 - val_loss: 0.1912 - lr: 2.6445e-05
Epoch 46/84
24/23 - 10s - auc: 0.9886 - loss: 0.2224 - val_auc: 0.8693 - val_loss: 0.1910 - lr: 2.3901e-05
Epoch 47/84
24/23 - 10s - auc: 0.9923 - loss: 0.2045 - val_auc: 0.8664 - val_loss: 0.1899 - lr: 2.1611e-05
Epoch 48/84
24/23 - 10s - auc: 0.9917 - loss: 0.2071 - val_auc: 0.8627 - val_loss: 0.1872 - lr: 1.9550e-05
Epoch 49/84
24/23 - 10s - auc: 0.9922 - loss: 0.2038 - val_auc: 0.8628 - val_loss: 0.1865 - lr: 1.7695e-05
Epoch 50/84
24/23 - 12s - auc: 0.9918 - loss: 0.2057 - val_auc: 0.8590 - val_loss: 0.1846 - lr: 1.6025e-05
Epoch 51/84
24/23 - 10s - auc: 0.9875 - loss: 0.2255 - val_auc: 0.8610 - val_loss: 0.1861 - lr: 1.4523e-05
Epoch 52/84
24/23 - 10s - auc: 0.9860 - loss: 0.2302 - val_auc: 0.8643 - val_loss: 0.1869 - lr: 1.3171e-05
Epoch 53/84
24/23 - 10s - auc: 0.9859 - loss: 0.2295 - val_auc: 0.8628 - val_loss: 0.1894 - lr: 1.1953e-05
Epoch 54/84
24/23 - 10s - auc: 0.9875 - loss: 0.2250 - val_auc: 0.8666 - val_loss: 0.1915 - lr: 1.0858e-05
Epoch 55/84
24/23 - 10s - auc: 0.9870 - loss: 0.2287 - val_auc: 0.8651 - val_loss: 0.1905 - lr: 9.8723e-06
Epoch 56/84
24/23 - 10s - auc: 0.9904 - loss: 0.2130 - val_auc: 0.8686 - val_loss: 0.1904 - lr: 8.9851e-06
Epoch 57/84
24/23 - 10s - auc: 0.9932 - loss: 0.1989 - val_auc: 0.8736 - val_loss: 0.1897 - lr: 8.1866e-06
Epoch 58/84
24/23 - 10s - auc: 0.9918 - loss: 0.2072 - val_auc: 0.8711 - val_loss: 0.1882 - lr: 7.4679e-06
Epoch 59/84
24/23 - 10s - auc: 0.9936 - loss: 0.1953 - val_auc: 0.8710 - val_loss: 0.1884 - lr: 6.8211e-06
Epoch 60/84
24/23 - 10s - auc: 0.9908 - loss: 0.2107 - val_auc: 0.8708 - val_loss: 0.1877 - lr: 6.2390e-06
Epoch 61/84
24/23 - 10s - auc: 0.9875 - loss: 0.2245 - val_auc: 0.8682 - val_loss: 0.1865 - lr: 5.7151e-06
Epoch 62/84
24/23 - 10s - auc: 0.9861 - loss: 0.2324 - val_auc: 0.8600 - val_loss: 0.1849 - lr: 5.2436e-06
Epoch 63/84
24/23 - 10s - auc: 0.9893 - loss: 0.2176 - val_auc: 0.8638 - val_loss: 0.1861 - lr: 4.8192e-06
Epoch 64/84
24/23 - 10s - auc: 0.9874 - loss: 0.2253 - val_auc: 0.8585 - val_loss: 0.1868 - lr: 4.4373e-06
Epoch 65/84
24/23 - 10s - auc: 0.9891 - loss: 0.2180 - val_auc: 0.8612 - val_loss: 0.1872 - lr: 4.0936e-06
Epoch 66/84
24/23 - 10s - auc: 0.9906 - loss: 0.2098 - val_auc: 0.8642 - val_loss: 0.1875 - lr: 3.7842e-06
Epoch 67/84
24/23 - 10s - auc: 0.9918 - loss: 0.2053 - val_auc: 0.8702 - val_loss: 0.1889 - lr: 3.5058e-06
Epoch 68/84
24/23 - 10s - auc: 0.9929 - loss: 0.2010 - val_auc: 0.8699 - val_loss: 0.1889 - lr: 3.2552e-06
Epoch 69/84
24/23 - 10s - auc: 0.9929 - loss: 0.1994 - val_auc: 0.8694 - val_loss: 0.1889 - lr: 3.0297e-06
Epoch 70/84
24/23 - 10s - auc: 0.9905 - loss: 0.2111 - val_auc: 0.8723 - val_loss: 0.1884 - lr: 2.8267e-06
Epoch 71/84
24/23 - 10s - auc: 0.9882 - loss: 0.2223 - val_auc: 0.8705 - val_loss: 0.1872 - lr: 2.6441e-06
Epoch 72/84
24/23 - 10s - auc: 0.9874 - loss: 0.2257 - val_auc: 0.8669 - val_loss: 0.1874 - lr: 2.4796e-06
Epoch 73/84
24/23 - 10s - auc: 0.9883 - loss: 0.2216 - val_auc: 0.8643 - val_loss: 0.1873 - lr: 2.3317e-06
Epoch 74/84
24/23 - 10s - auc: 0.9870 - loss: 0.2295 - val_auc: 0.8589 - val_loss: 0.1866 - lr: 2.1985e-06
Epoch 75/84
24/23 - 10s - auc: 0.9902 - loss: 0.2133 - val_auc: 0.8563 - val_loss: 0.1866 - lr: 2.0787e-06
Epoch 76/84
24/23 - 10s - auc: 0.9911 - loss: 0.2105 - val_auc: 0.8624 - val_loss: 0.1870 - lr: 1.9708e-06
Epoch 77/84
24/23 - 10s - auc: 0.9929 - loss: 0.2010 - val_auc: 0.8657 - val_loss: 0.1876 - lr: 1.8737e-06
Epoch 78/84
24/23 - 10s - auc: 0.9927 - loss: 0.1993 - val_auc: 0.8644 - val_loss: 0.1880 - lr: 1.7863e-06
Epoch 79/84
24/23 - 10s - auc: 0.9931 - loss: 0.1998 - val_auc: 0.8671 - val_loss: 0.1883 - lr: 1.7077e-06
Epoch 80/84
24/23 - 10s - auc: 0.9895 - loss: 0.2144 - val_auc: 0.8656 - val_loss: 0.1881 - lr: 1.6369e-06
Epoch 81/84
24/23 - 10s - auc: 0.9875 - loss: 0.2271 - val_auc: 0.8683 - val_loss: 0.1876 - lr: 1.5732e-06
Epoch 82/84
24/23 - 10s - auc: 0.9874 - loss: 0.2254 - val_auc: 0.8627 - val_loss: 0.1869 - lr: 1.5159e-06
Epoch 83/84
24/23 - 10s - auc: 0.9896 - loss: 0.2176 - val_auc: 0.8613 - val_loss: 0.1868 - lr: 1.4643e-06
Epoch 84/84
24/23 - 10s - auc: 0.9863 - loss: 0.2312 - val_auc: 0.8622 - val_loss: 0.1866 - lr: 1.4179e-06
Predicting OOF with TTA (AUC)...
161/160 - 45s
Predicting Test with TTA (AUC)...
269/268 - 77s
Predicting OOF with TTA (Loss)...
161/160 - 47s
Predicting Test with TTA (Loss)...
269/268 - 78s
#### FOLD 5 OOF AUC = 0.903, with TTA (Loss) = 0.902, with TTA (AUC) = 0.916, with TTA (Blend) = 0.913
###Markdown
Model loss graph
###Code
for n_fold, history in enumerate(history_list):
print(f'Fold: {n_fold + 1}')
plt.figure(figsize=(15,5))
plt.plot(np.arange(config['EPOCHS']), history['auc'],'-o',label='Train AUC',color='#ff7f0e')
plt.plot(np.arange(config['EPOCHS']), history['val_auc'],'-o',label='Val AUC',color='#1f77b4')
x = np.argmax(history['val_auc'])
y = np.max(history['val_auc'])
xdist = plt.xlim()[1] - plt.xlim()[0]
ydist = plt.ylim()[1] - plt.ylim()[0]
plt.scatter(x,y,s=200,color='#1f77b4')
plt.text(x-0.03*xdist,y-0.13*ydist,'max auc\n%.2f'%y,size=14)
plt.ylabel('AUC',size=14)
plt.xlabel('Epoch',size=14)
plt.legend(loc=2)
plt2 = plt.gca().twinx()
plt2.plot(np.arange(config['EPOCHS']), history['loss'],'-o',label='Train Loss',color='#2ca02c')
plt2.plot(np.arange(config['EPOCHS']), history['val_loss'],'-o',label='Val Loss',color='#d62728')
x = np.argmin(history['val_loss'])
y = np.min(history['val_loss'])
ydist = plt.ylim()[1] - plt.ylim()[0]
plt.scatter(x,y,s=200,color='#d62728')
plt.text(x-0.03*xdist,y+0.05*ydist,'min loss',size=14)
plt.ylabel('Loss',size=14)
plt.title('FOLD %i - Image Size %i' % (n_fold+1, config['HEIGHT']), size=18)
plt.legend(loc=3)
plt.show()
###Output
Fold: 1
###Markdown
Model loss graph aggregated
###Code
# plot_metrics_agg(history_list, config['N_USED_FOLDS'])
###Output
_____no_output_____
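###Markdown
 The helper `plot_metrics_agg` above is left commented out (it is presumably defined earlier in the notebook). As a rough substitute, the cell below is a minimal sketch that averages the per-fold curves directly, assuming each entry of `history_list` is a dict with 'auc', 'val_auc', 'loss' and 'val_loss' lists of length `config['EPOCHS']`.
###Code
# Minimal sketch (not the original plot_metrics_agg): average the per-fold training curves.
import numpy as np
import matplotlib.pyplot as plt

epochs = np.arange(config['EPOCHS'])
for metric in ['auc', 'loss']:
    train_mean = np.mean([h[metric] for h in history_list], axis=0)
    valid_mean = np.mean([h['val_' + metric] for h in history_list], axis=0)
    plt.figure(figsize=(15, 5))
    plt.plot(epochs, train_mean, '-o', label='Mean train ' + metric)
    plt.plot(epochs, valid_mean, '-o', label='Mean val ' + metric)
    plt.xlabel('Epoch', size=14)
    plt.ylabel(metric.upper(), size=14)
    plt.title('Mean %s across %i folds' % (metric, config['N_USED_FOLDS']), size=18)
    plt.legend(loc='best')
    plt.show()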
###Markdown
Model evaluation
###Code
# COMPUTE OVERALL OOF AUC (last)
oof = np.concatenate(oof_pred_last)
true = np.concatenate(oof_tar)
names = np.concatenate(oof_names)
folds = np.concatenate(oof_folds)
auc = roc_auc_score(true, oof)
print('Overall OOF AUC with TTA (last) = %.3f' % auc)
# COMPUTE OVERALL OOF AUC
oof = np.concatenate(oof_pred)
true = np.concatenate(oof_tar)
names = np.concatenate(oof_names)
folds = np.concatenate(oof_folds)
auc = roc_auc_score(true, oof)
print('Overall OOF AUC with TTA = %.3f' % auc)
# SAVE OOF TO DISK
df_oof = pd.DataFrame(dict(image_name=names, target=true, pred=oof, fold=folds))
df_oof.to_csv('oof.csv', index=False)
df_oof.head()
###Output
Overall OOF AUC with TTA (last) = 0.910
Overall OOF AUC with TTA = 0.909
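###Markdown
 Since the out-of-fold predictions were saved together with their fold indices, the AUC can also be recomputed fold by fold as a sanity check. The cell below is a small sketch that assumes the `df_oof` frame built above is still in memory.
###Code
# Sketch: recompute the OOF AUC per fold from the saved predictions.
from sklearn.metrics import roc_auc_score

for fold_id, df_fold in df_oof.groupby('fold'):
    fold_auc = roc_auc_score(df_fold['target'], df_fold['pred'])
    print('Fold %s OOF AUC = %.3f' % (fold_id, fold_auc))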
###Markdown
Visualize test predictions
###Code
ds = get_dataset(TEST_FILENAMES, augment=False, repeat=False, dim=config['HEIGHT'],
labeled=False, return_image_names=True)
image_names = np.array([img_name.numpy().decode("utf-8") for img, img_name in iter(ds.unbatch())])
submission = pd.DataFrame(dict(image_name=image_names, target=preds[:,0], target_last=preds_last[:,0]))
submission = submission.sort_values('image_name')
print(f"Test predictions {len(submission[submission['target'] > .5])}|{len(submission[submission['target'] <= .5])}")
print(f"Test predictions (last) {len(submission[submission['target_last'] > .5])}|{len(submission[submission['target_last'] <= .5])}")
print('Top 10 samples')
display(submission.head(10))
print('Top 10 positive samples')
display(submission.query('target > .5').head(10))
fig = plt.subplots(figsize=(20, 5))
plt.hist(submission['target'], bins=100)
plt.show()
fig = plt.subplots(figsize=(20, 5))
plt.hist(submission['target_last'], bins=100)
plt.show()
###Output
Test predictions 309|10673
Test predictions (last) 538|10444
Top 10 samples
###Markdown
Test set predictions
###Code
submission['target_blend'] = (submission['target'] * .5) + (submission['target_last'] * .5)
display(submission.head(10))
display(submission.describe().T)
### BEST ###
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
### LAST ###
submission_last = submission[['image_name', 'target_last']]
submission_last.columns = ['image_name', 'target']
submission_last.to_csv('submission_last.csv', index=False)
### BLEND ###
submission_blend = submission[['image_name', 'target_blend']]
submission_blend.columns = ['image_name', 'target']
submission_blend.to_csv('submission_blend.csv', index=False)
###Output
_____no_output_____ |
notebooks/4_comparison_and_report.ipynb | ###Markdown
In this notebook we explore the `compare` function and the `Report` class offered by `ranx`. First of all, we need to install [ranx](https://github.com/AmenRa/ranx). Mind that the first time you run any ranx functions they may take a while, as they must be compiled first.
###Code
!pip install -U ranx
###Output
_____no_output_____
###Markdown
Download the data we need
###Code
import os
import requests
for file in ["qrels", "run_1", "run_2", "run_3", "run_4", "run_5"]:
os.makedirs("notebooks/data", exist_ok=True)
with open(f"notebooks/data/{file}.trec", "w") as f:
master = f"https://raw.githubusercontent.com/AmenRa/ranx/master/notebooks/data/{file}.trec"
f.write(requests.get(master).text)
###Output
_____no_output_____
###Markdown
Compare
###Code
from ranx import Qrels, Run, compare
# Let's load qrels and runs from files
qrels = Qrels.from_file("notebooks/data/qrels.trec", kind="trec")
run_1 = Run.from_file("notebooks/data/run_1.trec", kind="trec")
run_2 = Run.from_file("notebooks/data/run_2.trec", kind="trec")
run_3 = Run.from_file("notebooks/data/run_3.trec", kind="trec")
run_4 = Run.from_file("notebooks/data/run_4.trec", kind="trec")
run_5 = Run.from_file("notebooks/data/run_5.trec", kind="trec")
# Compares different runs and performs statistical tests (Fisher's Randomization test)
report = compare(
qrels,
runs=[run_1, run_2, run_3, run_4, run_5],
metrics=["map@100", "mrr@100", "ndcg@10"],
max_p=0.01, # P-value threshold
# Use `fisher` for Fisher's Randomization Test (slower)
# or `student` for Two-sided Paired Student's t-Test (faster)
stat_test="fisher"
)
# The comparison results are saved in a Report instance,
# which provides handy functionalities such as tabular formatting
# (superscripts denote statistical significance differences)
report
# You can change the number of shown digits as follows
report.rounding_digits = 4
report
# You can show percentages instead of digits
# Note that the number of shown digits is based on
# the `rounding_digits` attribute, try changing it
report.rounding_digits = 3
report.show_percentages = True
report
# `rounding_digits` and `show_percentages` can be passed directly when
# calling `compare`
report = compare(
qrels,
runs=[run_1, run_2, run_3, run_4, run_5],
metrics=["map@100", "mrr@100", "ndcg@10"],
max_p=0.01, # P-value threshold
rounding_digits=3,
show_percentages=True,
)
report
# A Report can also be exported as a LaTeX table ready for scientific publications
# Again you can control the number of digits and showing percentages
report.rounding_digits = 4
report.show_percentages = False
print(report.to_latex())
report.rounding_digits = 3
report.show_percentages = True
print(report.to_latex())
# A Report also keeps track of the number of wins, ties, and losses
report.win_tie_loss[("run_1", "run_2")]
# You can extract a Report's data and convert it to a Python dictionary
# or save it as a JSON file
import json # Just for pretty printing
print(json.dumps(report.to_dict(), indent=4))
report.save("report.json")
###Output
_____no_output_____
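###Markdown
 The JSON file written above can be reloaded later with the standard library if only the raw numbers are needed. This is just a quick sketch; it does not rebuild a `Report` object.
###Code
# Sketch: reload the saved report data as a plain Python dictionary.
import json

with open("report.json") as f:
    saved_report = json.load(f)
print(list(saved_report.keys()))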
gravity-inversion.ipynb | ###Markdown
Gravity: Laguna del Maule Bouguer Gravity
This notebook illustrates the SimPEG code used to invert Bouguer gravity data collected at Laguna del Maule volcanic field, Chile. Refer to [Miller et al 2017 EPSL](https://doi.org/10.1016/j.epsl.2016.11.007) for full details. Originally implemented in the [SimPEG Examples](http://docs.simpeg.xyz/content/examples/04-grav/plot_laguna_del_maule_inversion.html#sphx-glr-content-examples-04-grav-plot-laguna-del-maule-inversion-py).
###Code
import deepdish as dd
import os
import tarfile
import SimPEG.PF as PF
from SimPEG import (
Maps, Regularization, Optimization, DataMisfit,
InvProblem, Directives, Inversion, Utils
)
from SimPEG.Utils.io_utils import download
import matplotlib.pyplot as plt
import numpy as np
# Download the data
url = "https://storage.googleapis.com/simpeg/Chile_GRAV_4_Miller/Chile_GRAV_4_Miller.tar.gz"
downloads = download(url, overwrite=True)
basePath = downloads.split(".")[0]
# unzip the tarfile
tar = tarfile.open(downloads, "r")
tar.extractall()
tar.close()
input_file = os.path.sep.join([basePath, 'LdM_input_file.inp'])
###Output
overwriting /Users/lindseyjh/git/simpeg-research/uda-2019-inversion/Chile_GRAV_4_Miller.tar.gz
Downloading https://storage.googleapis.com/simpeg/Chile_GRAV_4_Miller/Chile_GRAV_4_Miller.tar.gz
saved to: /Users/lindseyjh/git/simpeg-research/uda-2019-inversion/Chile_GRAV_4_Miller.tar.gz
Download completed!
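###Markdown
 Before setting the inversion parameters, it can be worth checking that the archive was actually extracted. The short sketch below only lists the downloaded folder and relies on the `basePath` variable from the cell above.
###Code
# Sketch: quick sanity check on the extracted files.
import os
print(sorted(os.listdir(basePath)))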
###Markdown
Input parameters
###Code
# Plotting parameters, max and min densities in g/cc
vmin = -0.6
vmax = 0.6
# weight exponent for default weighting
wgtexp = 3.
# %%
# Read in the input file, which includes all parameters at once
# (mesh, topo, model, survey, inv param, etc.)
driver = PF.GravityDriver.GravityDriver_Inv(input_file)
###Output
_____no_output_____
###Markdown
Forward simulation
###Code
# Access the mesh and survey information
mesh = driver.mesh
survey = driver.survey
# define gravity survey locations
rxLoc = survey.srcField.rxList[0].locs
# define gravity data and errors
d = survey.dobs
wd = survey.std
# Get the active cells
active = driver.activeCells
nC = len(active) # Number of active cells
# Create active map to go from reduce set to full
activeMap = Maps.InjectActiveCells(mesh, active, -100)
# Create static map
static = driver.staticCells
dynamic = driver.dynamicCells
staticCells = Maps.InjectActiveCells(
None, dynamic, driver.m0[static], nC=nC
)
mstart = driver.m0[dynamic]
# Get index of the center
midx = int(mesh.nCx/2)
# %%
# Now that we have a model and a survey we can build the linear system ...
# Create the forward model operator
prob = PF.Gravity.GravityIntegral(mesh, rhoMap=staticCells,
actInd=active)
prob.solverOpts['accuracyTol'] = 1e-4
# Pair the survey and problem
survey.pair(prob)
###Output
_____no_output_____
###Markdown
Set up the inversion
###Code
# create a custom directive to save the inversion model at every iteration to hdf5
class SaveModelEveryIterationHDF5(Directives.SaveEveryIteration):
"""SaveModelEveryIteration
This directive saves the model as a numpy array at each iteration. The
default direcroty is the current directoy and the models are saved as
`InversionModel-YYYY-MM-DD-HH-MM-iter.h5`
"""
mapping = None
def initialize(self):
print(
"SimPEG.SaveModelEveryIteration will save your models as: "
"'{0!s}###-{1!s}.h5'".format(
self.directory + os.path.sep, self.fileName
)
)
def endIter(self):
mesh = self.invProb.dmisfit.prob.mesh
rhoMap = self.invProb.dmisfit.prob.rhoMap
model = self.opt.xc
if self.mapping is not None:
model = self.mapping * model
model = model.reshape(mesh.vnC, order="F")
data = {
"x": mesh.vectorCCx,
"y": mesh.vectorCCy,
"z": mesh.vectorCCz,
"model": model
}
dd.io.save(
'{0!s}{1:03d}-{2!s}.h5'.format(
self.directory + os.path.sep, self.opt.iter, self.fileName
), data
)
# Apply depth weighting
wr = PF.Magnetics.get_dist_wgt(mesh, rxLoc, active, wgtexp,
np.min(mesh.hx)/4.)
wr = wr**2.
# %% Create inversion objects
reg = Regularization.Sparse(
mesh, indActive=active, mapping=staticCells, gradientType='total'
)
reg.mref = driver.mref[dynamic]
reg.cell_weights = wr * mesh.vol[active]
reg.norms = np.c_[0., 1., 1., 1.]
# reg.norms = driver.lpnorms
# Specify how the optimization will proceed
opt = Optimization.ProjectedGNCG(
maxIter=20, lower=driver.bounds[0],
upper=driver.bounds[1], maxIterLS=10,
maxIterCG=20, tolCG=1e-3
)
# Define misfit function (obs-calc)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.W = 1./wd
# create the default L2 inverse problem from the above objects
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
# Specify how the initial beta is found
betaest = Directives.BetaEstimate_ByEig(beta0_ratio=1e-2)
# save the results
save_model_mapping = Maps.InjectActiveCells(mesh, active, np.nan)
save_models = SaveModelEveryIterationHDF5(directory="inversion_results", mapping=save_model_mapping)
# IRLS sets up the Lp inversion problem
# Set the eps parameter parameter in Line 11 of the
# input file based on the distribution of model (DEFAULT = 95th %ile)
IRLS = Directives.Update_IRLS(
f_min_change=1e-4, maxIRLSiter=40, beta_tol=5e-1,
betaSearch=False
)
# Preconditioning refreshing for each IRLS iteration
update_Jacobi = Directives.UpdatePreconditioner()
# Create combined the L2 and Lp problem
inv = Inversion.BaseInversion(
invProb, directiveList=[IRLS, update_Jacobi, betaest, save_models]
)
# Run L2 and Lp inversion
mrec = inv.run(mstart)
###Output
SimPEG.InvProblem is setting bfgsH0 to the inverse of the eval2Deriv.
***Done using same Solver and solverOpts as the problem***
Approximated diag(JtJ) with linear operator
SimPEG.SaveModelEveryIteration will save your models as: 'inversion_results/###-InversionModel-2019-11-13-09-52.h5'
model has any nan: 0
=============================== Projected GNCG ===============================
# beta phi_d phi_m f |proj(x-g)-x| LS Comment
-----------------------------------------------------------------------------
x0 has any nan: 0
0 2.32e-03 1.30e+06 1.87e+01 1.30e+06 1.95e+02 0
1 1.16e-03 3.71e+03 7.00e+06 1.18e+04 1.04e+02 0
2 5.81e-04 1.85e+03 8.10e+06 6.56e+03 8.71e+01 0 Skip BFGS
3 2.91e-04 8.71e+02 9.27e+06 3.56e+03 7.20e+01 0 Skip BFGS
4 1.45e-04 3.74e+02 1.04e+07 1.89e+03 5.69e+01 0 Skip BFGS
5 7.26e-05 1.48e+02 1.15e+07 9.84e+02 5.21e+01 0 Skip BFGS
Reached starting chifact with l2-norm regularization: Start IRLS steps...
eps_p: 0.5000837959458159 eps_q: 0.5000837959458159
delta phim: 1.314e+06
6 3.63e-05 5.83e+01 2.21e+07 8.61e+02 3.72e+01 0 Skip BFGS
delta phim: 3.064e+00
7 8.53e-05 3.54e+01 2.66e+07 2.31e+03 6.81e+01 0 Skip BFGS
delta phim: 3.104e-01
8 5.52e-05 1.75e+02 2.88e+07 1.77e+03 5.10e+01 0
delta phim: 2.697e-01
9 5.52e-05 9.25e+01 3.34e+07 1.94e+03 1.39e+02 0
delta phim: 2.665e-01
10 3.43e-05 1.94e+02 3.42e+07 1.37e+03 2.18e+02 0
delta phim: 2.283e-01
11 3.43e-05 7.61e+01 3.88e+07 1.41e+03 1.03e+02 0
delta phim: 2.376e-01
12 3.43e-05 1.37e+02 3.98e+07 1.50e+03 1.94e+02 0
delta phim: 1.844e-01
13 3.43e-05 1.03e+02 4.22e+07 1.55e+03 1.79e+02 0
delta phim: 1.698e-01
14 2.38e-05 1.49e+02 4.22e+07 1.16e+03 8.75e+01 0
delta phim: 1.448e-01
15 2.38e-05 7.86e+01 4.57e+07 1.17e+03 7.36e+01 0
delta phim: 1.433e-01
16 2.38e-05 1.26e+02 4.49e+07 1.20e+03 6.94e+01 0
delta phim: 9.322e-02
17 2.38e-05 1.24e+02 4.56e+07 1.21e+03 1.71e+02 3
###Markdown
plot the results
###Code
# Plot observed data
Utils.PlotUtils.plot2Ddata(rxLoc, d)
# %%
# Write output model and data files and print misfit stats.
iteration = 20
iz = 20
data = dd.io.load(
f"{save_models.directory}{os.path.sep}{iteration:03d}-{save_models.fileName}.h5"
)
plt.pcolormesh(data["x"], data["y"], data["model"].reshape(mesh.vnC, order="F")[:, :, iz])
plt.title(f"z = {data['z'][iz]}m")
iy = 29
plt.pcolormesh(data["x"], data["z"], data["model"][:, iy, :].T)
plt.title(f"y = {data['y'][iy]}m")
###Output
_____no_output_____
###Markdown
Building a non-linear gravity inversion from scratch (almost)In this notebook, we'll build a non-linear gravity inversion to estimate the relief of a sedimentary basin. We'll implement smoothness regularization and see its effects on the solution. We'll also see how we can break the inversion by adding random noise, abusing regularization, and breaking the underlying assumptions.This notebook is part of the Geophysics Library lesson: [GeophysicsLibrary/non-linear-gravity-inversion](https://github.com/GeophysicsLibrary/non-linear-gravity-inversion) See the lesson material for extra code and setup instructions. ImportsWe'll use the basic scientific Python stack for this tutorial plus a custom module with the forward modelling function (based on the code from the [Harmonica](https://github.com/fatiando/harmonica) library).
###Code
import numpy as np
import matplotlib.pyplot as plt
# Our custom code (cheatcodes.py) with forward modelling and some utilities.
import cheatcodes as cc
###Output
_____no_output_____
###Markdown
This is a little trick to make the resolution of the matplotlib figures better for larger screens.
###Code
plt.rc("figure", dpi=120)
###Output
_____no_output_____
###Markdown
AssumptionsHere are some assumptions we'll work with:1. The basin is much larger in the y-dimension so we'll assume it's infinite (reducing the problem to 2D)1. The gravity disturbance is entirely due to the sedimentary basin1. The top of the basin is a flat surface at $z=0$1. The data are measured at a constant height of $z=1\ m$ Making synthetic dataFirst, we'll explore the forward modelling function and create some synthetic data.
###Code
depths, basin_boundaries = cc.synthetic_model()
print(basin_boundaries)
print(depths)
###Output
_____no_output_____
###Markdown
Plot the model.
###Code
cc.plot_prisms(depths, basin_boundaries)
###Output
_____no_output_____
###Markdown
Forward model some gravity data at a set of locations.
###Code
x = np.linspace(-5e3, 105e3, 60)
density = -300 # kg/m³
data = cc.forward_model(depths, basin_boundaries, density, x)
plt.figure(figsize=(9, 3))
plt.plot(x / 1000, data, ".k")
plt.xlabel("x [km]")
plt.ylabel("gravity disturbance [mGal]")
plt.show()
###Output
_____no_output_____
###Markdown
Calculating the Jacobian matrixThe first step to most inverse problems is being able to calculate the Jacobian matrix. We'll do this for our problem by a first-order finite differences approximation. If you can get analytical derivatives, that's usually a lot better.
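For reference, the forward-difference approximation implemented below is

$$ \frac{\partial d_i}{\partial p_j} \approx \frac{g_i(\mathbf{p} + \Delta\,\mathbf{e}_j) - g_i(\mathbf{p})}{\Delta}, $$

where $g_i$ is the forward model evaluated at station $x_i$, $\mathbf{e}_j$ is the unit vector for parameter $j$, and $\Delta$ is a small step in depth (10 m in the code below).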
###Code
def make_jacobian(parameters, basin_boundaries, density, x):
"""
Calculate the Jacobian matrix by finite differences.
"""
jacobian = np.empty((x.size, parameters.size))
step = np.zeros_like(parameters)
delta = 10
for j in range(jacobian.shape[1]):
step[j] += delta
jacobian[:, j] = (
(
cc.forward_model(parameters + step, basin_boundaries, density, x)
- cc.forward_model(parameters, basin_boundaries, density, x)
)
/ delta
)
step[j] = 0
return jacobian
###Output
_____no_output_____
###Markdown
Calculate and plot an example so we can see what this matrix looks like. We'll use a parameter vector with constant depths at first.
###Code
parameters = np.zeros(30) + 5000
jacobian = make_jacobian(parameters, basin_boundaries, density, x)
plt.figure()
plt.imshow(jacobian)
plt.colorbar(label="mGal/m")
plt.xlabel("columns")
plt.ylabel("rows")
plt.show()
###Output
_____no_output_____
###Markdown
Solve the inverse problem Now that we have a way of forward modelling and calculating the Jacobian matrix, we can implement the Gauss-Newton method for solving the non-linear inverse problem. The function below takes the input data, model configuration, and an initial estimate and outputs the estimated parameters and a list with the goal function value per iteration.
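In equations, each Gauss-Newton iteration solves the normal equations for an update step and adds it to the current parameter vector,

$$ \bar{\bar{J}}^T \bar{\bar{J}}\,\Delta\mathbf{p} = \bar{\bar{J}}^T \mathbf{r}, \qquad \mathbf{p}_{k+1} = \mathbf{p}_k + \Delta\mathbf{p}, $$

where $\bar{\bar{J}}$ is the Jacobian and $\mathbf{r}$ is the residual vector (observed minus predicted data); this is the `np.linalg.solve(hessian, gradient)` step in the function below.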
###Code
def basin2d_inversion(x, data, basin_boundaries, density, initial, max_iterations=10):
"""
Solve the inverse problem using the Gauss-Newton method.
"""
parameters = initial.astype(np.float64).copy()
predicted = cc.forward_model(parameters, basin_boundaries, density, x)
residuals = data - predicted
goal_function = [np.linalg.norm(residuals)**2]
for i in range(max_iterations):
jacobian = make_jacobian(parameters, basin_boundaries, density, x)
hessian = jacobian.T @ jacobian
gradient = jacobian.T @ residuals
deltap = np.linalg.solve(hessian, gradient)
new_parameters = parameters + deltap
predicted = cc.forward_model(new_parameters, basin_boundaries, density, x)
residuals = data - predicted
current_goal = np.linalg.norm(residuals)**2
if current_goal > goal_function[-1]:
break
parameters = new_parameters
goal_function.append(current_goal)
return parameters, goal_function
###Output
_____no_output_____
###Markdown
Now we can use this function to invert our synthetic data.
###Code
estimated, goal_function = basin2d_inversion(
x, data, basin_boundaries, density, initial=np.full(30, 1000),
)
predicted = cc.forward_model(estimated, basin_boundaries, density, x)
###Output
_____no_output_____
###Markdown
Plot the observed vs predicted data so we can inspect the fit.
###Code
plt.figure(figsize=(9, 3))
plt.plot(x / 1e3, data, ".k", label="observed")
plt.plot(x / 1e3, predicted, "-r", label='predicted')
plt.legend()
plt.xlabel("x [km]")
plt.ylabel("gravity disturbance [mGal]")
plt.show()
###Output
_____no_output_____
###Markdown
Look at the convergence of the method.
###Code
plt.figure()
plt.plot(goal_function)
plt.yscale("log")
plt.xlabel("iteration")
plt.ylabel("goal function (mGal²)")
plt.show()
###Output
_____no_output_____
###Markdown
And finally see if our estimate is close to the true model.
###Code
ax = cc.plot_prisms(depths, basin_boundaries)
cc.plot_prisms(estimated, basin_boundaries, edgecolor="blue", ax=ax)
###Output
_____no_output_____
###Markdown
Perfect! It seems that our inversion works well under these conditions (this initial estimate and no noise in the data). **Now let's break it!** **Your turn****Add pseudo-random noise to the data using the `np.random.normal` function and investigate the effect this has on the inversion results.** A typical gravity survey has an accuracy between 0.5 and 1 mGal. Stability testWe can go one step further and run several of these inversions in a loop for different random noise realizations. This will give us an idea of how stable the inversion is.
###Code
estimates = []
for i in range(5):
noise = np.random.normal(loc=0, scale=1, size=data.size)
noisy_data = data + noise
estimated, goal_function = basin2d_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000),
)
estimates.append(estimated)
ax = cc.plot_prisms(depths, basin_boundaries)
for estimated in estimates:
cc.plot_prisms(estimated, basin_boundaries, edgecolor="#0000ff66", ax=ax)
###Output
_____no_output_____
###Markdown
**Question time!****Why does the inversion become more unstable for the deeper portions of the model?**Hint: It's related to the physics of the forward modelling and the Jacobian matrix. Regularization to the rescueTo deal with the instability issues we encountered, we will apply **first-order Tikhonov regularization** (aka "smoothness"). First thing we need to do is create the finite difference matrix $\bar{\bar{R}}$.
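Before building the regularization, here is a minimal sketch (an illustration added to this lesson, assuming the `make_jacobian`, `basin_boundaries`, `density`, and `x` objects defined earlier) that makes the hint above concrete: the sensitivity of the data to a prism bottom drops quickly as that bottom gets deeper, so the corresponding Jacobian columns shrink and noise gets amplified into large depth changes.

```python
# Sketch: compare data sensitivity for a shallow vs a deep constant-depth model.
for depth in (1000.0, 10000.0):
    jac = make_jacobian(np.full(30, depth), basin_boundaries, density, x)
    sensitivity = np.linalg.norm(jac, axis=0)  # one column norm per model parameter
    print(f"basement at {depth:>7.0f} m -> mean Jacobian column norm: {sensitivity.mean():.2e}")
```

The deep model's columns come out far smaller, which is why the bottom of the basin is the first thing noise destabilizes and why we need regularization.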
###Code
def finite_difference_matrix(nparams):
"""
Create the finite difference matrix for regularization.
"""
fdmatrix = np.zeros((nparams - 1, nparams))
for i in range(fdmatrix.shape[0]):
fdmatrix[i, i] = -1
fdmatrix[i, i + 1] = 1
return fdmatrix
finite_difference_matrix(10)
###Output
_____no_output_____
###Markdown
Now we can use this to make a new inversion function with smoothness.
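For reference, adding the smoothness term with a regularization parameter $\mu$ changes the update solved at each iteration to

$$ \left(\bar{\bar{J}}^T\bar{\bar{J}} + \mu\,\bar{\bar{R}}^T\bar{\bar{R}}\right)\Delta\mathbf{p} = \bar{\bar{J}}^T\mathbf{r} - \mu\,\bar{\bar{R}}^T\bar{\bar{R}}\,\mathbf{p}_k, $$

which is exactly the `hessian` and `gradient` built inside the loop below (the `smoothness` argument plays the role of $\mu$).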
###Code
def basin2d_smooth_inversion(x, data, basin_boundaries, density, initial, smoothness, max_iterations=10):
"""
Solve the regularized inverse problem using the Gauss-Newton method.
"""
parameters = initial.astype(np.float64).copy()
predicted = cc.forward_model(parameters, basin_boundaries, density, x)
residuals = data - predicted
goal_function = [np.linalg.norm(residuals)**2]
fdmatrix = finite_difference_matrix(parameters.size)
for i in range(max_iterations):
jacobian = make_jacobian(parameters, basin_boundaries, density, x)
hessian = jacobian.T @ jacobian + smoothness * fdmatrix.T @ fdmatrix
gradient = jacobian.T @ residuals - smoothness * fdmatrix.T @ fdmatrix @ parameters
deltap = np.linalg.solve(hessian, gradient)
new_parameters = parameters + deltap
predicted = cc.forward_model(new_parameters, basin_boundaries, density, x)
residuals = data - predicted
current_goal = np.linalg.norm(residuals)**2
if current_goal > goal_function[-1]:
break
parameters = new_parameters
goal_function.append(current_goal)
return parameters, goal_function
###Output
_____no_output_____
###Markdown
Now we check if it works on our noisy data.
###Code
estimates = []
for i in range(5):
noise = np.random.normal(loc=0, scale=1, size=data.size)
noisy_data = data + noise
estimated, goal_function = basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
estimates.append(estimated)
ax = cc.plot_prisms(depths, basin_boundaries)
for estimated in estimates:
cc.plot_prisms(estimated, basin_boundaries, edgecolor="#0000ff66", ax=ax)
###Output
_____no_output_____
###Markdown
**Question time!****What happens when the regularization parameter is extremely high?** Try to predict what the answer would be and then execute the code to check your reasoning.Hint: what is the smoothest possible model? **Your turn****Can our regularized model recover a non-smooth geometry?** For example, real sedimentary basins often have [faults](https://en.wikipedia.org/wiki/Fault_(geology)) running through them, causing sharp jumps in the sediment thickness (up or down). To answer this question:1. Use the modified model depths below (the `depths` array) that introduce a shift up or down by 1-2 km in a section of the model of about 5-10 km.2. Generate new noisy data with this new model3. Invert the noisy data and try to find a model that: 1. Fits the data 2. Is stable (doesn't vary much if we change the noise) 3. Recovers the sharp boundary
###Code
fault_model = np.copy(depths)
fault_model[45:55] -= 2000
cc.plot_prisms(fault_model, basin_boundaries)
###Output
_____no_output_____
###Markdown
Now fill out the rest of the code below! **Question time!****What would happen if we used a "sharpness" regularization?** Would we be able to recover the faults? What about the smoother parts of the model? One type of sharpness regularization is called "total-variation regularization" and it [has been used for this problem in the past](https://doi.org/10.1190/1.3524286). Extra thinking points* What happens if we get the density wrong?* What are the sources of uncertainty in our final solution? Is it just the noise in the data?* How much does the solution depend on the initial estimate? **Bonus:** Optimizing codeThe code we wrote is not the greatest and it does take a while to run even for these really small 2D problems. There are ways in which we can make the code fast. But before we do any of that, **we need to know where our code spends most of its time**. Otherwise, we could spend hours optimizing a part of the code that is already really fast.This can be done with tools called **profilers**, which measure the time spent in each function of your code. This is also why it's very important to **break up your code into functions**. In a Jupyter notebook, you can run the standard Python profiler by using the `%%prun` cell magic:
###Code
%%prun
basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
###Output
_____no_output_____
###Markdown
The `tottime` column is the amount of time spent on the function itself (not counting functions called inside it) and `cumtime` is the total time spent in the function, including function calls inside it. We can see from the profiling that the majority of the computation is spent in forward modelling, in particular for building the Jacobian. So if we can optimize `make_jacobian` that will have the biggest impact on performance of all.To start, let's measure the computation time of `make_jacobian` with the `%%timeit` magic:
###Code
%%timeit
make_jacobian(np.full(30, 1000), basin_boundaries, density, x)
###Output
_____no_output_____
###Markdown
Alright, now we can try to do better.For many of these problems, the biggest return on investment is **not** parallelization or going to C/Fortran. **The largest improvements come from better maths/physics**. Here, we can take advantage of potential-field theory to cut down on the computation time of the Jacobian. We'll use the fact that the difference in gravity values produced by two models is the same as the gravity value produced by the difference in the models. Meaning that $\delta g = g(m_1) - g(m_2) = g(m_1 - m_2)$. This way, we can reduce by more than half the number of forward modelling operations we do in the finite-difference computations.So instead of calculating the entire basin model with and without a small step in a single parameter, we can only calculate the effect of that small step.
###Code
def make_jacobian_fast(parameters, basin_boundaries, density, x):
"""
Calculate the Jacobian matrix by finite differences.
"""
jacobian = np.empty((x.size, parameters.size))
delta = 10
boundaries = cc.prism_boundaries(parameters, basin_boundaries)
for j in range(jacobian.shape[1]):
jacobian[:, j] = (
(
# Replace with a single forward modelling of a single prism
cc.prism_gravity(x, boundaries[j], boundaries[j + 1], parameters[j], parameters[j] + delta, density)
)
/ delta
)
return jacobian
###Output
_____no_output_____
###Markdown
First, we check if the results are still correct.
###Code
np.allclose(
make_jacobian(np.full(30, 1000), basin_boundaries, density, x),
make_jacobian_fast(np.full(30, 1000), basin_boundaries, density, x)
)
###Output
_____no_output_____
###Markdown
Now we can measure the time again:
###Code
%%timeit
make_jacobian_fast(np.full(30, 1000), basin_boundaries, density, x)
###Output
_____no_output_____
###Markdown
This one change gave us 2 orders of magnitude improvement in the function that makes up most of the computation time. **Now that is time well spent!**We can measure how much of a difference this makes for the inversion as a whole by making a new function with our fast Jacobian matrix calculation.
###Code
def fast_basin2d_smooth_inversion(x, data, basin_boundaries, density, initial, smoothness, max_iterations=10):
"""
Solve the regularized inverse problem using the Gauss-Newton method.
"""
parameters = initial.astype(np.float64).copy()
predicted = cc.forward_model(parameters, basin_boundaries, density, x)
residuals = data - predicted
goal_function = [np.linalg.norm(residuals)**2]
fdmatrix = finite_difference_matrix(parameters.size)
for i in range(max_iterations):
# Swap out the slow jacobian for the fast one
jacobian = make_jacobian_fast(parameters, basin_boundaries, density, x)
hessian = jacobian.T @ jacobian + smoothness * fdmatrix.T @ fdmatrix
gradient = jacobian.T @ residuals - smoothness * fdmatrix.T @ fdmatrix @ parameters
deltap = np.linalg.solve(hessian, gradient)
new_parameters = parameters + deltap
predicted = cc.forward_model(new_parameters, basin_boundaries, density, x)
residuals = data - predicted
current_goal = np.linalg.norm(residuals)**2
if current_goal > goal_function[-1]:
break
parameters = new_parameters
goal_function.append(current_goal)
return parameters, goal_function
###Output
_____no_output_____
###Markdown
And now we can measure the computation time for both.
###Code
%%timeit
basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
%%timeit
fast_basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
###Output
_____no_output_____
###Markdown
Building a non-linear gravity inversion from scratch (almost)In this notebook, we'll build a non-linear gravity inversion to estimate the relief of a sedimentary basin. We'll implement smoothness regularization and see its effects on the solution. We'll also see how we can break the inversion by adding random noise, abusing regularization, and breaking the underlying assumptions. ImportsWe'll use the basic scientific Python stack for this tutorial plus a custom module with the forward modelling function (based on the code from the [Harmonica](https://github.com/fatiando/harmonica) library).
###Code
import numpy as np
import matplotlib.pyplot as plt
import cheatcodes as cc
###Output
_____no_output_____
###Markdown
This is a little trick to make the resolution of the matplotlib figures better for larger screens.
###Code
plt.rc("figure", dpi=120)
###Output
_____no_output_____
###Markdown
AssumptionsHere are some assumptions we'll work with:1. The basin is much larger in the y-dimension so we'll assume it's infinite (reducing the problem to 2D)1. The gravity disturbance is entirely due to the sedimentary basin1. The top of the basin is a flat surface at $z=0$1. The data are measured at a constant height of $z=1\ m$ Making synthetic dataFirst, we'll explore the forward modelling function and create some synthetic data.
###Code
depths, basin_boundaries = cc.synthetic_model()
print(basin_boundaries)
print(depths)
###Output
_____no_output_____
###Markdown
Plot the model.
###Code
cc.plot_prisms(depths, basin_boundaries)
###Output
_____no_output_____
###Markdown
Forward model some gravity data at a set of locations.
###Code
x = np.linspace(-5e3, 105e3, 60)
density = -300 # kg/m³
data = cc.forward_model(depths, basin_boundaries, density, x)
plt.figure(figsize=(9, 3))
plt.plot(x / 1000, data, ".k")
plt.xlabel("x [km]")
plt.ylabel("gravity disturbance [mGal]")
plt.show()
###Output
_____no_output_____
###Markdown
Calculating the Jacobian matrixThe first step to most inverse problems is being able to calculate the Jacobian matrix. We'll do this for our problem by a first-order finite differences approximation. If you can get analytical derivatives, that's usually a lot better.
###Code
def make_jacobian(parameters, basin_boundaries, density, x):
"""
Calculate the Jacobian matrix by finite differences.
"""
jacobian = np.empty((x.size, parameters.size))
step = np.zeros_like(parameters)
delta = 10
for j in range(jacobian.shape[1]):
step[j] += delta
jacobian[:, j] = (
(
cc.forward_model(parameters + step, basin_boundaries, density, x)
- cc.forward_model(parameters, basin_boundaries, density, x)
)
/ delta
)
step[j] = 0
return jacobian
###Output
_____no_output_____
###Markdown
Calculate and plot an example so we can see what this matrix looks like. We'll use a parameter vector with constant depths at first.
###Code
parameters = np.zeros(30) + 5000
jacobian = make_jacobian(parameters, basin_boundaries, density, x)
plt.figure()
plt.imshow(jacobian)
plt.colorbar(label="mGal/m")
plt.xlabel("columns")
plt.ylabel("rows")
plt.show()
###Output
_____no_output_____
###Markdown
Solve the inverse problem Now that we have a way of forward modelling and calculating the Jacobian matrix, we can implement the Gauss-Newton method for solving the non-linear inverse problem. The function below takes the input data, model configuration, and an initial estimate and outputs the estimated parameters and a list with the goal function value per iteration.
###Code
def basin2d_inversion(x, data, basin_boundaries, density, initial, max_iterations=10):
"""
Solve the inverse problem using the Gauss-Newton method.
"""
parameters = initial.astype(np.float64).copy()
predicted = cc.forward_model(parameters, basin_boundaries, density, x)
residuals = data - predicted
goal_function = [np.linalg.norm(residuals)**2]
for i in range(max_iterations):
jacobian = make_jacobian(parameters, basin_boundaries, density, x)
hessian = jacobian.T @ jacobian
gradient = jacobian.T @ residuals
deltap = np.linalg.solve(hessian, gradient)
new_parameters = parameters + deltap
predicted = cc.forward_model(new_parameters, basin_boundaries, density, x)
residuals = data - predicted
current_goal = np.linalg.norm(residuals)**2
if current_goal > goal_function[-1]:
break
parameters = new_parameters
goal_function.append(current_goal)
return parameters, goal_function
###Output
_____no_output_____
###Markdown
Now we can use this function to invert our synthetic data.
###Code
estimated, goal_function = basin2d_inversion(
x, data, basin_boundaries, density, initial=np.full(30, 1000),
)
predicted = cc.forward_model(estimated, basin_boundaries, density, x)
###Output
_____no_output_____
###Markdown
Plot the observed vs predicted data so we can inspect the fit.
###Code
plt.figure(figsize=(9, 3))
plt.plot(x / 1e3, data, ".k", label="observed")
plt.plot(x / 1e3, predicted, "-r", label='predicted')
plt.legend()
plt.xlabel("x [km]")
plt.ylabel("gravity disturbance [mGal]")
plt.show()
###Output
_____no_output_____
###Markdown
Look at the convergence of the method.
###Code
plt.figure()
plt.plot(goal_function)
plt.yscale("log")
plt.xlabel("iteration")
plt.ylabel("goal function (mGal²)")
plt.show()
###Output
_____no_output_____
###Markdown
And finally see if our estimate is close to the true model.
###Code
ax = cc.plot_prisms(depths, basin_boundaries)
cc.plot_prisms(estimated, basin_boundaries, edgecolor="blue", ax=ax)
###Output
_____no_output_____
###Markdown
Perfect! It seems that our inversion works well under these conditions (this initial estimate and no noise in the data). **Now let's break it!** **Flex your coding muscles****Add pseudo-random noise to the data using the `np.random.normal` function and investigate the effect this has on the inversion results.** A typical gravity survey has an accuracy between 0.5 and 1 mGal. **Question time!****Why does the inversion struggle to recover the deeper portions of the model in particular?**Hint: It's related to the physics of the forward modelling and the Jacobian matrix. Regularization to the rescueTo deal with the instability issues we encountered, we will apply **first-order Tikhonov regularization** (aka "smoothness"). First thing we need to do is create the finite difference matrix $\bar{\bar{R}}$.
###Code
def finite_difference_matrix(nparams):
"""
Create the finite difference matrix for regularization.
"""
fdmatrix = np.zeros((nparams - 1, nparams))
for i in range(fdmatrix.shape[0]):
fdmatrix[i, i] = -1
fdmatrix[i, i + 1] = 1
return fdmatrix
finite_difference_matrix(10)
###Output
_____no_output_____
###Markdown
Now we can use this to make a new inversion function with smoothness.
###Code
def basin2d_smooth_inversion(x, data, basin_boundaries, density, initial, smoothness, max_iterations=10):
"""
Solve the regularized inverse problem using the Gauss-Newton method.
"""
parameters = initial.astype(np.float64).copy()
predicted = cc.forward_model(parameters, basin_boundaries, density, x)
residuals = data - predicted
goal_function = [np.linalg.norm(residuals)**2]
fdmatrix = finite_difference_matrix(parameters.size)
for i in range(max_iterations):
jacobian = make_jacobian(parameters, basin_boundaries, density, x)
hessian = jacobian.T @ jacobian + smoothness * fdmatrix.T @ fdmatrix
gradient = jacobian.T @ residuals - smoothness * fdmatrix.T @ fdmatrix @ parameters
deltap = np.linalg.solve(hessian, gradient)
new_parameters = parameters + deltap
predicted = cc.forward_model(new_parameters, basin_boundaries, density, x)
residuals = data - predicted
current_goal = np.linalg.norm(residuals)**2
if current_goal > goal_function[-1]:
break
parameters = new_parameters
goal_function.append(current_goal)
return parameters, goal_function
###Output
_____no_output_____
###Markdown
Now we check if it works on our noisy data.
###Code
estimates = []
for i in range(5):
noise = np.random.normal(loc=0, scale=1, size=data.size)
noisy_data = data + noise
estimated, goal_function = basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
estimates.append(estimated)
ax = cc.plot_prisms(depths, basin_boundaries)
for estimated in estimates:
cc.plot_prisms(estimated, basin_boundaries, edgecolor="#0000ff66", ax=ax)
###Output
_____no_output_____
###Markdown
**Question time!****What happens when the regularization parameter is extremely high?** Try to predict what the answer would be and then execute the code to check your reasoning.Hint: what is the smoothest possible model? **Flex your coding muscles****Can our regularized model recover a non-smooth geometry?** For example, real sedimentary basins often have [faults](https://en.wikipedia.org/wiki/Fault_(geology)) running through them, causing sharp jumps in the sediment thickness (up or down). To answer this question:1. Modify our model depths (the `depths` array) to introduce a shift up or down by 1-2 km in a section of the model of about 5-10 km.2. Generate new noisy data with this new model3. Invert the noisy data and try to find a model that: 1. Fits the data 2. Is stable (doesn't vary much if we change the noise) 3. Recovers the sharp boundary Hint: Use `np.copy` to make a copy of the `depths` (to avoid overwriting it). **Question time!****What would happen if we used a "sharpness" regularization?** Would we be able to recover the faults? What about the smoother parts of the model? One type of sharpness regularization is called "total-variation regularization" and it [has been used for this problem in the past](https://doi.org/10.1190/1.3524286). Extra thinking points* What happens if we get the density wrong?* What are the sources of uncertainty in our final solution? Is it just the noise in the data?* How much does the solution depend on the initial estimate? **Bonus:** Optimizing codeThe code we wrote is not the greatest and it does take a while to run even for these really small 2D problems. There are ways in which we can make the code fast. But before we do any of that, **we need to know where our code spends most of its time**. Otherwise, we could spend hours optimizing a part of the code that is already really fast.This can be done with tools called **profilers**, which measure the time spent in each function of your code. This is also why it's very important to **break up your code into functions**. In a Jupyter notebook, you can run the standard Python profiler by using the `%%prun` cell magic:
###Code
%%prun
basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
###Output
_____no_output_____
###Markdown
The `tottime` column is the amount of time spent on the function itself (not counting functions called inside it) and `cumtime` is the total time spent in the function, including function calls inside it. We can see from the profiling that the majority of the computation is spent in forward modelling, in particular for building the Jacobian. So if we can optimize `make_jacobian` that will have the biggest impact on performance of all.To start, let's measure the computation time of `make_jacobian` with the `%%timeit` magic:
###Code
%%timeit
make_jacobian(np.full(30, 1000), basin_boundaries, density, x)
###Output
_____no_output_____
###Markdown
Alright, now we can try to do better.For many of these problems, the biggest return on investment is **not** parallelization or going to C/Fortran. **The largest improvements come from better maths/physics**. Here, we can take advantage of potential-field theory to cut down on the computation time of the Jacobian. We'll use the fact that the difference in gravity values produced by two models is the same as the gravity value produced by the difference in the models. Meaning that $\delta g = g(m_1) - g(m_2) = g(m_1 - m_2)$. This way, we can reduce by more than half the number of forward modelling operations we do in the finite-difference computations.So instead of calculating the entire basin model with and without a small step in a single parameter, we can only calculate the effect of that small step.
###Code
def make_jacobian_fast(parameters, basin_boundaries, density, x):
"""
Calculate the Jacobian matrix by finite differences.
"""
jacobian = np.empty((x.size, parameters.size))
delta = 10
boundaries = cc.prism_boundaries(parameters, basin_boundaries)
for j in range(jacobian.shape[1]):
jacobian[:, j] = (
(
# Replace with a single forward modelling of a single prism
cc.prism_gravity(x, boundaries[j], boundaries[j + 1], parameters[j], parameters[j] + delta, density)
)
/ delta
)
return jacobian
###Output
_____no_output_____
###Markdown
First, we check if the results are still correct.
###Code
np.allclose(
make_jacobian(np.full(30, 1000), basin_boundaries, density, x),
make_jacobian_fast(np.full(30, 1000), basin_boundaries, density, x)
)
###Output
_____no_output_____
###Markdown
Now we can measure the time again:
###Code
%%timeit
make_jacobian_fast(np.full(30, 1000), basin_boundaries, density, x)
###Output
_____no_output_____
###Markdown
This one change gave us 2 orders of magnitude improvement in the function that makes up most of the computation time. **Now that is time well spent!**We can measure how much of a difference this makes for the inversion as a whole by making a new function with our fast Jacobian matrix calculation.
###Code
def fast_basin2d_smooth_inversion(x, data, basin_boundaries, density, initial, smoothness, max_iterations=10):
"""
Solve the regularized inverse problem using the Gauss-Newton method.
"""
parameters = initial.astype(np.float64).copy()
predicted = cc.forward_model(parameters, basin_boundaries, density, x)
residuals = data - predicted
goal_function = [np.linalg.norm(residuals)**2]
fdmatrix = finite_difference_matrix(parameters.size)
for i in range(max_iterations):
# Swap out the slow jacobian for the fast one
jacobian = make_jacobian_fast(parameters, basin_boundaries, density, x)
hessian = jacobian.T @ jacobian + smoothness * fdmatrix.T @ fdmatrix
gradient = jacobian.T @ residuals - smoothness * fdmatrix.T @ fdmatrix @ parameters
deltap = np.linalg.solve(hessian, gradient)
new_parameters = parameters + deltap
predicted = cc.forward_model(new_parameters, basin_boundaries, density, x)
residuals = data - predicted
current_goal = np.linalg.norm(residuals)**2
if current_goal > goal_function[-1]:
break
parameters = new_parameters
goal_function.append(current_goal)
return parameters, goal_function
###Output
_____no_output_____
###Markdown
And now we can measure the computation time for both.
###Code
%%timeit
basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
%%timeit
fast_basin2d_smooth_inversion(
x, noisy_data, basin_boundaries, density, initial=np.full(30, 1000), smoothness=1e-5
)
###Output
_____no_output_____ |
lecture_09.ipynb | ###Markdown
Statistics with Lists!
###Code
import math
from scipy import stats
import scipy
import string
def getNumbers():
nums = [] # start with an empty list
# sentinel loop to get numbers
xStr = raw_input("Enter a number (<Enter> to quit): ")
while xStr != "":
x = int(xStr)
nums.append(x) # add this value to the list
xStr = raw_input("Enter a number (<Enter> to quit: ")
return nums
def mean(nums):
summ = 0.0
for num in nums:
summ = summ + num
return summ / len(nums)
def stdDev(nums, xbar):
sumDevSq = 0.0
for num in nums:
dev = xbar - num
sumDevSq = sumDevSq+dev**2
## sumDevSq = sumDevSq + dev * dev
return math.sqrt(sumDevSq/(len(nums)-1))
def median(nums):
nums.sort()
size = len(nums)
midPos = size / 2
if size % 2 == 0:
median = (nums[midPos] + nums[midPos-1]) / 2.0
else:
median = nums[midPos]
return median
def main():
data = getNumbers()
#data = [2,4,6,9,13]
xbar = mean(data)
std = stdDev(data, xbar)
## std = scipy.stats.tstd(data)
## print std
med = median(data)
    print "The mean of this set of numbers is %.2f, the std is %.2f, "\
          "and the median is %.2f" % (xbar, std, med)
##
##
##
##
main()
###Output
_____no_output_____
###Markdown
Practice: Dictionaries
###Code
psswrd = {}
psswrd["cara"] = "singing"
psswrd['susana'] = 'batman'
psswrd['michele'] = 'purple'
print psswrd
print psswrd.keys()
print psswrd.values()
print psswrd.items()
###Output
_____no_output_____
###Markdown
Practice: Word Frequency (with Dictionaries!) To be viewed after practicum
###Code
import string
def getFile(filename):
infile = open(filename, "r")
text = infile.read()
infile.close()
return text
def wordCount():
filename = "c:/Python27/Methods_1/book_34.txt"
text = getFile(filename)
text = text.lower()
punc_list = [".", "?", "-", "!", '"', "'", "$", ";", ":",",", "(", ")",
"\x92", "\x93", "\x94"]
## replace punctuations with a space
for char in punc_list:
text = string.replace(text, char, " ")
words = text.split()
##print words[400:500]
counts = {}
for w in words:
if counts.has_key(w):
counts[w] = counts[w] + 1
else:
counts[w] = 1
word_freq_list = counts.items()
return word_freq_list
def main():
word_freq_list = wordCount()
print word_freq_list[:100]
main()
###Output
_____no_output_____
###Markdown
Practice: Order Word Count Lists
###Code
def Order(items):
## Inputs to this function are a list of word frequencies
## Outputs a list sorted based on frequency
values = []
ordered_freq = []
for i in range(len(items)):
count = items[i][1]
idx = i
val_tup = (count, idx)
values.append(val_tup)
## Sorts by the first item
values.sort()
## Reverses the list - most frequent first
values.reverse()
for thing in values:
idx = thing[1]
ordered = items[idx]
ordered_freq.append(ordered)
return ordered_freq
def main():
word_freq_list = wordCount()
final = Order(word_freq_list)
print final[:100]
main()
###Output
_____no_output_____
###Markdown
Lecture 09:* Creating a softmax classifier with a kaggle dataset* URL for the dataset: https://www.kaggle.com/c/otto-group-product-classification-challenge/data- For the sake of the exercise, the data is saved in the data folder
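A note added for clarity: the network defined further below ends in a plain `Linear(120, 9)` layer with no explicit softmax, because `torch.nn.CrossEntropyLoss` applies log-softmax internally. For a logit vector $z$ and true class $y$ the per-sample loss is

$$ \mathrm{CE}(z, y) = -\log\frac{e^{z_y}}{\sum_{k=1}^{9} e^{z_k}}, $$

so the model outputs raw logits, and a softmax only needs to be applied explicitly if you want class probabilities at prediction time.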
###Code
import numpy as np
import pandas as pd
import torch
import matplotlib.pyplot as plt
from sklearn import model_selection
import time
from sklearn.preprocessing import LabelEncoder
# Libraries for creating the dataloader
from torch.utils.data import DataLoader, Dataset
from torch import cuda
device = 'cuda' if cuda.is_available() else 'cpu'
# Load and split the data in train and valid
df = pd.read_csv('./data/otto_train.csv')
df = df[df.columns[1:]]
number = LabelEncoder()
df['target'] = number.fit_transform(df['target'].astype('str'))
df_train, df_valid = model_selection.train_test_split(df, test_size = 0.2, random_state = 42, stratify=df.target.values)
# Create a custom Dataset class which accepts a dataframe and returns tensors in the form the model needs
class Mydataset():
def __init__(self, df):
self.xy = df
self.len = self.xy.shape[0]
self.x_data = torch.from_numpy(self.xy.iloc[:,0:-1].to_numpy(dtype=np.float32))
self.y_data = torch.from_numpy(self.xy.iloc[:,-1].to_numpy(dtype=np.float32))
self.y_data = self.y_data.long()
def __getitem__(self, index):
return self.x_data[index], self.y_data[index]
def __len__(self):
return self.len
df.shape
# Creating a dataset
train_dataset = Mydataset(df_train)
valid_dataset = Mydataset(df_valid)
# Creating dataloaders
train_dataloader = DataLoader(dataset = train_dataset, batch_size=32, shuffle=True, num_workers=0)
valid_dataloader = DataLoader(dataset = valid_dataset, batch_size=32, shuffle=True, num_workers=0)
# Creating a network for passing the data
class Model(torch.nn.Module):
def __init__(self):
super (Model, self).__init__()
self.l1 = torch.nn.Linear(93, 720)
self.l2 = torch.nn.Linear(720, 540)
self.l3 = torch.nn.Linear(540, 320)
self.l4 = torch.nn.Linear(320, 240)
self.l5 = torch.nn.Linear(240, 120)
self.l6 = torch.nn.Linear(120, 9)
        self.relu = torch.nn.ReLU()  # use the nn.ReLU module here (torch.nn.functional.relu is a function and cannot be instantiated)
def forward(self, x):
out_1 = self.relu(self.l1(x))
out_2 = self.relu(self.l2(out_1))
out_3 = self.relu(self.l3(out_2))
out_4 = self.relu(self.l4(out_3))
out_5 = self.relu(self.l5(out_4))
y_pred = self.l6(out_5)
return y_pred
model = Model()
model.to(device)
criterion = torch.nn.CrossEntropyLoss(reduction='mean')
optimus = torch.optim.AdamW(model.parameters(), lr = 0.01)
def train(epoch):
model.train()
for _, data in enumerate(train_dataloader, 0):
inputs, labels = data
y_pred = model(inputs.to(device))
labels = labels.to(device)
loss = criterion(y_pred,labels)
if _%500 == 0:
print(f'Epoch: {epoch}, Loss: {loss.item()}')
optimus.zero_grad()
loss.backward()
optimus.step()
for epoch in range(0,2):
train(epoch)
def valid(model, valid_dataloader):
model.eval()
n_correct = 0; n_wrong = 0; total = 0
with torch.no_grad():
for _, data in enumerate(valid_dataloader, 0):
inputs, labels = data
output = model(inputs.to(device))
labels = labels.to(device)
big_val, big_idx = torch.max(output.data, dim=1)
total += labels.size(0)
n_correct += (big_idx == labels).sum().item()
# if big_idx.item() == target.item()[1]:
# n_correct += 1
# else:
# n_wrong += 1
return (n_correct * 100.0) / (total)
# valid_loss += criterion(output, target).item()
# pred = output.data.max(1, keepdim=True)[1]
# correct += pred.eq(target.data.view_as(pred)).cpu().sum()
# valid_loss /= len(valid_dataloader.dataset)
# print(f'===========================\nTest set: Average loss: {valid_loss:.4f}, Accuracy: {correct}/{len(valid_dataloader.dataset)} '
# f'({100. * correct / len(valid_dataloader.dataset):.0f}%)')
print('This is the validation section to print the accuracy and see how it performs')
print('Here we are leveraging the dataloader created for the validation dataset; this approach uses more of PyTorch')
acc = valid(model, valid_dataloader)
print("Accuracy on test data = %0.2f%%" % acc)
def accuracy(model, data_x, data_y):
# data_x and data_y are numpy nd arrays
N = len(data_x) # number data items
n = len(data_x[0]) # number features
n_correct = 0; n_wrong = 0
for i in range(N):
X = torch.Tensor(data_x[i].reshape((1,n)))
Y = torch.LongTensor(data_y[i].reshape((1,1)))
oupt = model(X.to(device))
(big_val, big_idx) = torch.max(oupt, dim=1)
if big_idx.item() == data_y[i]:
n_correct += 1
else:
n_wrong += 1
return (n_correct * 100.0) / (n_correct + n_wrong)
print('This is the validation section to print the accuracy and see how it performs')
print('This is more of a hack using the original dataset and converting it into tensors and calculating it directly')
data_x = df_valid.iloc[:,0:-1].to_numpy(dtype=np.float32)
data_y = df_valid.iloc[:,-1].to_numpy(dtype=np.float32)
model.eval()
acc = accuracy(model, data_x, data_y)
print("Accuracy on test data = %0.2f%%" % acc)
# if __name__ == '__main__':
# since = time.time()
# for epoch in range(1, 10):
# epoch_start = time.time()
# train(epoch)
# m, s = divmod(time.time() - epoch_start, 60)
# print(f'Training time: {m:.0f}m {s:.0f}s')
# valid()
# m, s = divmod(time.time() - epoch_start, 60)
# print(f'Testing time: {m:.0f}m {s:.0f}s')
# m, s = divmod(time.time() - since, 60)
# print(f'Total Time: {m:.0f}m {s:.0f}s\nModel was trained on {device}!')
###Output
_____no_output_____ |
Hands-on-ML/Code/Chapter 12/type_conversion.ipynb | ###Markdown
They have to be the same type
###Code
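# float32 + int32: TensorFlow does not auto-convert dtypes, so this line raises an InvalidArgumentError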
tf.constant(2.0) + tf.constant(40)
t2 = tf.constant(40., dtype=tf.float64)
tf.constant(2.0) + tf.cast(t2, tf.float32)
###Output
_____no_output_____ |
homeworks/D091/Day091_classification_with_cv.ipynb | ###Markdown
Generate training data with histogram features
###Code
x_train_histogram = []
x_test_histogram = []
# For every training image
for i in range(len(x_train)):
    chans = cv2.split(x_train[i]) # split the image into its 3 channels
    # for every channel
    hist_feature = []
    for chan in chans:
        # compute the histogram of this channel
        hist = cv2.calcHist([chan], [0], None, [16], [0, 256]) # use 16 bins
        hist_feature.extend(hist.flatten())
    # collect the computed histogram features
    x_train_histogram.append(hist_feature)
# do the same for all the test images
for i in range(len(x_test)):
    chans = cv2.split(x_test[i]) # split the image into its 3 channels
    # for every channel
    hist_feature = []
    for chan in chans:
        # compute the histogram of this channel
        hist = cv2.calcHist([chan], [0], None, [16], [0, 256]) # use 16 bins
        hist_feature.extend(hist.flatten())
    x_test_histogram.append(hist_feature)
x_train_histogram = np.array(x_train_histogram)
x_test_histogram = np.array(x_test_histogram)
###Output
_____no_output_____
###Markdown
Generate training data with HOG features* HOG features are built by computing and aggregating histograms of gradient orientations over local regions of the image. The details are beyond the scope we cover here; interested readers can refer to the [supplementary material](https://www.cnblogs.com/zyly/p/9651261.html)
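As a quick reference for the `hog` function below: the Sobel gradients $g_x, g_y$ are converted to magnitude and orientation,

$$ \mathrm{mag} = \sqrt{g_x^2 + g_y^2}, \qquad \mathrm{ang} = \operatorname{atan2}(g_y, g_x), $$

the orientation is quantized into 16 bins via $\lfloor 16\,\mathrm{ang} / 2\pi \rfloor$, and a magnitude-weighted histogram is accumulated over each of the four image quadrants, giving a $4 \times 16 = 64$-dimensional feature vector per image.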
###Code
# SZ=20
bin_n = 16 # Number of bins
def hog(img):
img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
mag, ang = cv2.cartToPolar(gx, gy)
bins = np.int32(bin_n*ang/(2*np.pi)) # quantizing binvalues in (0...16)
bin_cells = bins[:10,:10], bins[10:,:10], bins[:10,10:], bins[10:,10:]
mag_cells = mag[:10,:10], mag[10:,:10], mag[:10,10:], mag[10:,10:]
hists = [np.bincount(b.ravel(), m.ravel(), bin_n) for b, m in zip(bin_cells, mag_cells)]
hist = np.hstack(hists) # hist is a 64 bit vector
return hist.astype(np.float32)
x_train_hog = np.array([hog(x) for x in x_train])
x_test_hog = np.array([hog(x) for x in x_test])
###Output
_____no_output_____
###Markdown
SVM model* SVM is a classic classification algorithm in machine learning; if you are interested in the details, see [this explanation on Zhihu](https://www.zhihu.com/question/21094489). Here we directly call the ready-made implementation in OpenCV. Train the SVM model with histogram features* Training may take a little while, please be patient
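For reference (this is the standard textbook formulation, not something specific to this notebook), the soft-margin SVM solved by `cv2.ml.SVM_C_SVC` with a linear kernel minimizes

$$ \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_i \xi_i \quad \text{s.t.}\quad y_i\,(w^T x_i + b) \ge 1 - \xi_i,\ \xi_i \ge 0, $$

where $C$ is the penalty parameter set with `setC` below (here $C = 2.67$).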
###Code
SVM_hist = cv2.ml.SVM_create()
SVM_hist.setKernel(cv2.ml.SVM_LINEAR)
SVM_hist.setGamma(5.383)
SVM_hist.setType(cv2.ml.SVM_C_SVC)
SVM_hist.setC(2.67)
#training
SVM_hist.train(x_train_histogram, cv2.ml.ROW_SAMPLE, y_train)
# prediction
_, y_hist_train = SVM_hist.predict(x_train_histogram)
_, y_hist_test = SVM_hist.predict(x_test_histogram)
###Output
_____no_output_____
###Markdown
Train the SVM model with HOG features* Training may take a little while, please be patient
###Code
SVM_hog = cv2.ml.SVM_create()
SVM_hog.setKernel(cv2.ml.SVM_LINEAR)
SVM_hog.setGamma(5.383)
SVM_hog.setType(cv2.ml.SVM_C_SVC)
SVM_hog.setC(2.67)
#training
SVM_hog.train(x_train_hog, cv2.ml.ROW_SAMPLE, y_train)
# prediction
_, y_hog_train = SVM_hog.predict(x_train_hog)
_, y_hog_test = SVM_hog.predict(x_test_hog)
###Output
_____no_output_____
###Markdown
AssignmentTry to compare the accuracy on the CIFAR-10 training and testing data between an SVM classifier trained with color-histogram features and one trained with HOG features
###Code
import matplotlib.pyplot as plt
%matplotlib inline
print("hog train acc", np.sum(y_hog_train == y_train) / len(y_train))
print("hist test acc", np.sum(y_hist_test == y_test) / len(y_test))
print("hist train acc", np.sum(y_hist_train == y_train) / len(y_train))
print("hog test acc", np.sum(y_hog_test == y_test) / len(y_test))
# plt.plot(range(len(valid_f1sc)), valid_f1sc, label="valid acc")
# plt.legend()
# plt.title("F1-score")
# plt.show()
###Output
hog train acc 0.1928
hist test acc 0.1448
hist train acc 0.14846
hog test acc 0.1905
|
song-popularity/song-popularity.ipynb | ###Markdown
Impute missing data
###Code
total = train.isnull().sum().sort_values(ascending=False)
percent = (train.isnull().sum()/train.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(10)
f, ax = plt.subplots(figsize=(15, 6))
plt.xticks(rotation='90')
sns.barplot(x=missing_data.index, y=missing_data['Percent'])
plt.xlabel('Features', fontsize=15)
plt.ylabel('Percent', fontsize=15)
plt.title('Percent of missing values by feature', fontsize=15)
plt.show()
f, ax = plt.subplots(figsize=(15, 6))
plt.xticks(rotation='90')
sns.histplot(x=train.song_duration_ms)
plt.title('song_duration_ms', fontsize=15)
plt.show()
FEATURES = ['song_duration_ms', 'acousticness', 'danceability', 'energy', 'instrumentalness', 'key', 'liveness', 'loudness', 'audio_mode',
'speechiness', 'tempo', 'time_signature', 'audio_valence']
imputer = IterativeImputer()
train[FEATURES] = imputer.fit_transform(train[FEATURES])
test[FEATURES] = imputer.transform(test[FEATURES])
total = train.isnull().sum().sort_values(ascending=False)
percent = (train.isnull().sum()/train.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(10)
###Output
_____no_output_____
###Markdown
Balance target
###Code
f, ax = plt.subplots(figsize=(15, 6))
plt.xticks(rotation='90')
sns.countplot(x=train.song_popularity)
plt.title('song_popularity', fontsize=15)
plt.show()
FEATURE_COLUMNS = [ 'key', 'audio_mode', 'time_signature']
CAT_FEATURES = []
def add_cross_features(df):
for feature1 in FEATURE_COLUMNS:
for feature2 in FEATURE_COLUMNS:
if feature1 != feature2:
x2_feature_name = f'{feature1}-{feature2}'
x2_feature_name_2 = f'{feature2}-{feature1}'
if x2_feature_name not in df.columns and x2_feature_name_2 not in df.columns:
df[x2_feature_name] = df[feature1].astype(str) + '_' + df[feature2].astype(str)
CAT_FEATURES.append(x2_feature_name)
return df
train = add_cross_features(train)
test = add_cross_features(test)
ALL_FEATURES = FEATURES
for f in CAT_FEATURES:
if f not in ALL_FEATURES:
ALL_FEATURES.append(f)
print(len(ALL_FEATURES))
def encode_cat_feature( df, test_df, feature_name):
print(feature_name)
all_values = np.concatenate( (df[feature_name].values, test_df[feature_name].values))
encoder = LabelEncoder()
encoder.fit(all_values)
df[feature_name] = encoder.transform(df[feature_name])
test_df[feature_name] = encoder.transform(test_df[feature_name])
return df, test_df
CAT_FEATURES.append('audio_mode')
CAT_FEATURES.append('key')
CAT_FEATURES.append('time_signature')
for cat in CAT_FEATURES:
train, test = encode_cat_feature(train, test, cat)
#train.drop(['audio_mode', 'key', 'time_signature'], axis=1, inplace=True)
#test.drop(['audio_mode', 'key', 'time_signature'], axis=1, inplace=True)
ALL_FEATURES
RANDOM_SEED = 42
sampler = RandomOverSampler()
X_over, y_over = sampler.fit_resample(train, train.song_popularity)
sampler = RandomUnderSampler()
X_under, y_under = sampler.fit_resample(train, train.song_popularity)
sampler = SMOTEENN(random_state=RANDOM_SEED)
X_sme, y_sme = sampler.fit_resample(train, train.song_popularity)
train.drop(['id'], axis=1, inplace=True)
test.drop(['id'], axis=1, inplace=True)
USE_UNDER_SAMPLER = False
USE_OVER_SAMPLER = False
USE_SMOTEENN_SAMPLER = False
if USE_UNDER_SAMPLER:
target = y_under.astype(int)
train = X_under
elif USE_OVER_SAMPLER:
target = y_over.astype(int)
train = X_over
elif USE_SMOTEENN_SAMPLER:
target = y_sme.astype(int)
train = X_sme
else:
target = train.song_popularity.astype(int)
train.drop(['song_popularity'], axis=1, inplace=True)
## HIGH CORR
#train.drop(['acousticness', 'loudness'], axis=1, inplace=True)
#test.drop(['acousticness', 'loudness'], axis=1, inplace=True)
#train_profile = profile(train, title="Train Data", minimal=False)
#display(train_profile)
#test_profile = profile(test, title="Test Data", minimal=False)
#display(test_profile)
###Output
_____no_output_____
###Markdown
Scale data
###Code
test
def scale_feature( df, test_df, feature_name):
print(feature_name)
all_values = np.concatenate( (df[feature_name].values, test_df[feature_name].values)).reshape(-1, 1)
scaler = RobustScaler()
scaler.fit(all_values)
df[feature_name] = scaler.transform(df[feature_name].values.reshape(-1, 1))
test_df[feature_name] = scaler.transform(test_df[feature_name].values.reshape(-1, 1))
return df, test_df
SCALE_FEATURES = ['song_duration_ms', 'key-audio_mode', 'key-time_signature', 'audio_mode-time_signature']
train_scaled = train
test_scaled = test
for cat in SCALE_FEATURES:
train_scaled, test_scaled = scale_feature(train_scaled, test_scaled, cat)
#scaler = RobustScaler()
#train_scaled = pd.DataFrame(scaler.fit_transform(train), columns=train.columns)
#test_scaled = pd.DataFrame(scaler.transform(test), columns=test.columns)
#train_scaled = train
#test_scaled = test
print( train_scaled.shape)
print( test_scaled.shape)
###Output
(40000, 16)
(10000, 16)
###Markdown
Tune
###Code
NUM_BOOST_ROUND = 600
EARLY_STOPPING_ROUNDS = 200
VERBOSE_EVAL = 100
RANDOM_SEED = 42
TUNING = False
N_TRIALS = 25
def objective(trial, X, y):
scale_pos_weight = sum(y==0)/sum(y==1)
param_grid = {
'verbosity': 1,
'objective': 'binary:logistic',
'eval_metric' : 'auc',
'n_estimators' : NUM_BOOST_ROUND,
'learning_rate': trial.suggest_float('learning_rate', 0.0001, 0.01),
'eta': trial.suggest_float('eta', 0.001, 0.1),
'max_depth': trial.suggest_int('max_depth', 50, 500),
'min_child_weight': trial.suggest_float('min_child_weight', 50, 250),
'colsample_bytree': trial.suggest_float('colsample_bytree', 0.1, 0.9),
'gamma': trial.suggest_float('gamma', 0, 100),
'subsample': trial.suggest_float('subsample', 0.1, 0.9),
'lambda': trial.suggest_float('lambda', 1, 10),
'alpha': trial.suggest_float('alpha', 0, 9),
'scale_pos_weight': scale_pos_weight,
}
X_train, X_valid, y_train, y_valid = train_test_split( X, y, test_size=0.25,
random_state=RANDOM_SEED, shuffle=True)
dtrain = xgb.DMatrix(X_train, label=y_train, enable_categorical=True)
dvalid = xgb.DMatrix(X_valid, label=y_valid, enable_categorical=True)
model = xgb.XGBClassifier( use_label_encoder=True, **param_grid)
model.fit( X_train.values, y_train.values,
eval_set = [ (X_train.values, y_train.values), (X_valid.values, y_valid.values) ],
callbacks = [xgb.callback.EarlyStopping(rounds=EARLY_STOPPING_ROUNDS, save_best=True)],
verbose=False)
oof_pred = model.predict(X_valid)
score = f1_score(y_valid, oof_pred)
oof_auc_score = roc_auc_score(y_valid, oof_pred)
print(f'OOF F1 score: {score}, AUC score: {oof_auc_score}')
return oof_auc_score
if TUNING:
study = optuna.create_study(direction='maximize')
objective_func = lambda trial: objective(trial, train_scaled, target)
study.optimize(objective_func, n_trials=N_TRIALS)
print("Number of finished trials: {}".format(len(study.trials)))
print("Best trial:")
trial = study.best_trial
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
print(" {}: {}".format(key, value))
TOTAL_SPLITS = 2
N_REPEATS = 2
NUM_BOOST_ROUND = 1000
EARLY_STOPPING_ROUNDS = 100
VERBOSE_EVAL = 200
def run_train(X, y, run_params, splits, num_boost_round, verbose_eval, early_stopping_rounds ):
models = []
scores = []
eval_results = {} # to record eval results for plotting
#folds = RepeatedStratifiedKFold(n_splits=splits, n_repeats=N_REPEATS, random_state=RANDOM_SEED)
    folds = StratifiedKFold(n_splits=splits, shuffle=True, random_state=RANDOM_SEED)  # shuffle=True so that random_state has an effect
for fold_n, (train_index, valid_index) in enumerate(folds.split(X, y)):
print(f'Fold {fold_n+1} started')
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
dtrain = xgb.DMatrix(X_train, label=y_train, enable_categorical=True)
dvalid = xgb.DMatrix(X_valid, label=y_valid, enable_categorical=True)
model = xgb.train( run_params, dtrain,
num_boost_round = num_boost_round,
evals=[(dvalid, 'evals')],
verbose_eval = verbose_eval,
early_stopping_rounds=early_stopping_rounds
)
scores.append(model.best_score)
models.append(model)
return models, eval_results, scores
scale_pos_weight = sum(target==0)/sum(target==1)
under_params = {
'verbosity': 1,
'objective': 'binary:logistic',
'eval_metric': ['logloss','auc'],
'learning_rate': 0.176722404455782796,
'eta': 0.9764414987043088,
'max_depth': 200,
'min_child_weight': 80,
'colsample_bytree': 0.8,
'gamma': 2.2,
'subsample': 0.95,
'lambda': 2.54,
'alpha': 0.8,
}
run_params = {
'verbosity': 1,
'objective': 'binary:logistic',
'eval_metric': ['logloss','auc'],
'learning_rate': 0.1974105129432222,
'eta': 0.29151787034760856,
'scale_pos_weight': scale_pos_weight,
}
models, eval_results, scores = run_train(train_scaled, target, under_params,
TOTAL_SPLITS, NUM_BOOST_ROUND, VERBOSE_EVAL, EARLY_STOPPING_ROUNDS)
###Output
Fold 1 started
[0] evals-logloss:0.68045 evals-auc:0.53447
[122] evals-logloss:0.66899 evals-auc:0.55008
Fold 2 started
[0] evals-logloss:0.68022 evals-auc:0.54124
[112] evals-logloss:0.66814 evals-auc:0.54913
###Markdown
Validation
###Code
from sklearn.metrics import ConfusionMatrixDisplay
y_pred = np.zeros( (len(models), len(train_scaled)) )
for i in range(len(models)):
y_pred[i] = models[i].predict(xgb.DMatrix(train_scaled, enable_categorical=True))
y_pred = np.mean(y_pred, axis=0)
y_predicted = np.where(y_pred > 0.5, 1, 0)
cm = confusion_matrix(target, y_predicted)
cm_display = ConfusionMatrixDisplay(cm).plot()
from sklearn.metrics import roc_curve, precision_recall_curve
from sklearn.metrics import RocCurveDisplay, PrecisionRecallDisplay
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
fpr, tpr, _ = roc_curve(target, y_pred)
prec, recall, _ = precision_recall_curve(target, y_pred)
roc_display = RocCurveDisplay(fpr=fpr, tpr=tpr).plot(ax=ax1)
pr_display = PrecisionRecallDisplay(precision=prec, recall=recall).plot(ax=ax2)
###Output
_____no_output_____
###Markdown
Submission
###Code
y_pred = np.zeros( (len(models), len(test)) )
for i in range(len(models)):
y_pred[i] = models[i].predict(xgb.DMatrix(test, enable_categorical=True))
y_pred = np.mean(y_pred, axis=0)
y_predicted = np.where(y_pred > 0.5, 1, 0)
# more like a sanity check
f, ax = plt.subplots(figsize=(15, 6))
plt.xticks(rotation='90')
sns.countplot(x=y_predicted)
plt.show()
submission['song_popularity'] = y_predicted
submission.to_csv('submission.csv', index=False, float_format='%.6f')
submission.head(20)
###Output
_____no_output_____ |
PyTorch/Tutorial01-3.ipynb | ###Markdown
PyTorch Tutorial 03. Neural Networks- Based essentially on the content of [this tutorial](https://tutorials.pytorch.kr/beginner/blitz/neural_networks_tutorial.html). - The material gradually moves toward deep learning. Define neural network Here we define a basic neural network. Before doing any actual training, this is merely an example to explain the following points:- How do you define a neural network in PyTorch?- How are the operations carried out internally?- There appears to be no padding; is this simply PyTorch's default behavior?
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
############################################################
## user-defined layers in the network
############################################################
## define layer (1) convolution (2D)
## Conv2d(n_input_channels, n_output_channels, convolution_size)
self.conv1 = nn.Conv2d(1, 6, 3) ## (1 channels -> 6 channels) with 3x3 conv
self.conv2 = nn.Conv2d(6, 16, 3) ## (6 channels -> 16 channels) with 3x3 conv
## define layer (2) fully connected
## Linear(n_input_neurons, n_output_neurons)
self.fc1 = nn.Linear(16 * 6 * 6, 120) ## (16-channels x 6x6 -> 120)
self.fc2 = nn.Linear(120, 84) ## (120 -> 84)
self.fc3 = nn.Linear(84, 10) ## ( 84 -> 10)
def forward(self, x):
############################################################
## user-defined network by connecting layers (NO PADDING!)
############################################################
## block (1) convolution: 32x32x1 -(conv3x3)-> 30x30x6 -(maxpooling)-> 15x15x6
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, (2, 2)) # Max pooling over a (2, 2) window
## block (2) convolution: 15x15x6 -(conv3x3)-> 13x13x16 -(maxpooling)-> 6x6x16
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2) # size is a square when you only specify a single number
## top-layer: fully connected network
x = x.view(-1, self.num_flat_features(x)) ## [N,C,H,W] -> [N, CxHxW]
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
## show model's (trainable) parameters detail
params = list(net.parameters())
print(len(params))
for i in range(len(params)):
print(params[i].size())
## neural network input & output
input_image = torch.randn(1, 1, 32, 32) ## random image NCHW format
out = net(input_image)
print(out)
###Output
tensor([[ 0.0762, 0.0015, 0.0348, 0.1176, -0.0889, -0.1429, -0.0517, -0.1441,
0.0986, 0.0090]], grad_fn=<AddmmBackward>)
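A short aside on the padding question raised above: nn.Conv2d uses padding=0 by default, which is why the 3x3 convolutions shrink the feature maps (32x32 to 30x30); padding has to be requested explicitly. A minimal illustrative sketch (the layers below are hypothetical and not part of the tutorial network):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)  # NCHW input, same shape as above

conv_no_pad = nn.Conv2d(1, 6, kernel_size=3)             # default padding=0
conv_pad = nn.Conv2d(1, 6, kernel_size=3, padding=1)     # zero-padding of 1 keeps H and W

print(conv_no_pad(x).shape)  # torch.Size([1, 6, 30, 30])
print(conv_pad(x).shape)     # torch.Size([1, 6, 32, 32])
```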
###Markdown
Loss function
###Code
net.zero_grad() ## zero-clear the gradient buffers of all parameters
out.backward(torch.randn(1, 10))
output = net(input_image)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
## traceback the autograd functions
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
###Output
<MseLossBackward object at 0x000002023B6F4E88>
<AddmmBackward object at 0x000002023B6F4588>
<AccumulateGrad object at 0x000002023B6F4E88>
###Markdown
Backprop- .backward(): performs backpropagation of the error.- .zero_grad(): can be thought of as clearing any previously computed gradients.
###Code
net.zero_grad() # zero-clear the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad) # now it will be updated
###Output
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([-0.0087, -0.0170, 0.0170, -0.0114, -0.0081, 0.0094])
###Markdown
Update weights
###Code
## Stochastic graient in a scratch
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
import torch.optim as optim
## optimizer for neural network learning
optimizer = optim.SGD(net.parameters(), lr=0.01)
##
optimizer.zero_grad() # zero the gradient buffers
output = net(input_image)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
###Output
_____no_output_____ |
ipynb/Germany-Bayern-LK-Traunstein.ipynb | ###Markdown
Germany: LK Traunstein (Bayern)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Traunstein.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Traunstein", weeks=5);
overview(country="Germany", subregion="LK Traunstein");
compare_plot(country="Germany", subregion="LK Traunstein", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Traunstein")
# get population of the region for future normalisation:
inhabitants = population(country="Germany", subregion="LK Traunstein")
print(f'Population of country="Germany", subregion="LK Traunstein": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Traunstein.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Traunstein (Bayern)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Traunstein.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Traunstein", weeks=5);
overview(country="Germany", subregion="LK Traunstein");
compare_plot(country="Germany", subregion="LK Traunstein", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Traunstein")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Traunstein.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Traunstein (Bayern)* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Traunstein.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Traunstein");
# load the data
cases, deaths, region_label = germany_get_region(landkreis="LK Traunstein")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Bayern-LK-Traunstein.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
docs/io/chianti.ipynb | ###Markdown
CHIANTIAn Atomic Database for Spectroscopic Diagnostics of Astrophysical Plasmas . ChiantiIonReaderThe `ChiantiIonReader` class is a wrapper for the `ChiantiPy` package that interacts directly with the [database files](https://db.chiantidatabase.org/) and as a consequence uses a different notation. It provides information about individual ions.
###Code
from carsus.io.chianti_ import ChiantiIonReader
h_1 = ChiantiIonReader('h_1') # H I == Chianti's `h_1` == Carsus's (1,0)
h_1.levels
###Output
_____no_output_____
###Markdown
See the [API section](https://epassaro.github.io/carsus/api/carsus.io.chianti_.chianti_.html) for more information. ChiantiReaderThe `ChiantiReader` class mimics the structure of `GFALLReader` and provides `levels`, `lines` and `collisions` tables for the selected ions.
###Code
from carsus.io.chianti_ import ChiantiReader
chianti_reader = ChiantiReader('H-He', collisions=True)
chianti_reader.levels
chianti_reader.lines
chianti_reader.collisions
###Output
_____no_output_____
###Markdown
CHIANTIImport the `ChiantiReader` class.
###Code
from carsus.io.chianti_ import ChiantiReader
chianti_reader = ChiantiReader('H-He', collisions=True)
chianti_reader.levels
chianti_reader.lines
chianti_reader.collisions
###Output
_____no_output_____
###Markdown
Dump data with method `to_hdf`.
###Code
chianti_reader.to_hdf('chianti.h5')
# Hidden cell
!rm chianti.h5
###Output
_____no_output_____ |
notebooks/train_xgboost_model.ipynb | ###Markdown
Load Train Data (alternative loader: train = pd.read_csv('../data/processed/train_agg_x_envios.csv', sep=';'))
###Code
# Load Train Data
train = pd.read_pickle('../data/processed/train_x_envios_feateng.pkl')
train.shape
train['fecha_venta_norm'] = pd.to_datetime(train['fecha_venta_norm'])
train['fecha_venta_norm'] = train['fecha_venta_norm'].dt.date
###Output
_____no_output_____
###Markdown
We filter the months we consider good for training (11 and 12): train = train[train.fecha_venta_norm.isin([ date(2012, 11, 1), date(2012, 12, 1), date(2013, 11, 1), date(2013, 12, 1), date(2014, 11, 1)])] predictors = ['id_pos','canal', 'competidores', 'ingreso_mediana', 'densidad_poblacional', 'pct_0a5', 'pct_5a9', 'pct_10a14', 'pct_15a19', 'pct_20a24', 'pct_25a29', 'pct_30a34', 'pct_35a39', 'pct_40a44', 'pct_45a49', 'pct_50a54', 'pct_55a59', 'pct_60a64', 'pct_65a69', 'pct_70a74', 'pct_75a79', 'pct_80a84', 'pct_85ainf', 'pct_bachelors', 'pct_doctorados', 'pct_secundario', 'pct_master', 'pct_bicicleta', 'pct_omnibus', 'pct_subtes', 'pct_taxi', 'pct_caminata', 'mediana_valor_hogar', 'unidades']
###Code
predictors = [
'id_pos',
'canal',
'competidores',
'ingreso_mediana',
'ingreso_promedio',
'densidad_poblacional',
'pct_0a5',
'pct_5a9',
'pct_10a14',
'pct_15a19',
'pct_20a24',
'pct_25a29',
'pct_30a34',
'pct_35a39',
'pct_40a44',
'pct_45a49',
'pct_50a54',
'pct_55a59',
'pct_60a64',
'pct_65a69',
'pct_70a74',
'pct_75a79',
'pct_80a84',
'pct_85ainf',
'pct_bachelors',
'pct_doctorados',
'pct_secundario',
'pct_master',
'pct_bicicleta',
'pct_omnibus',
'pct_subtes',
'pct_taxi',
'pct_caminata',
'mediana_valor_hogar',
'unidades_despachadas_sum',
'unidades_despachadas_max',
'unidades_despachadas_min',
'unidades_despachadas_avg',
'cantidad_envios_max',
'cantidad_envios_min',
'cantidad_envios_avg',
'num_cantidad_envios',
'unidades_despachadas_sum_acum',
'unidades_despachadas_sum_acum_3p',
'unidades_despachadas_sum_acum_6p',
'unidades_despachadas_max_acum',
'unidades_despachadas_min_acum',
'num_cantidad_envios_acum',
'num_cantidad_envios_acum_3per',
'num_cantidad_envios_acum_6per',
'diff_dtventa_dtenvio',
'unidades_before',
'num_ventas_before',
'rel_unidades_num_ventas',
'unidades_acum',
'num_ventas_acum',
'countacum', 'unidades_mean',
'num_ventas_mean',
'unidades_2time_before',
'unidades_diff',
'month',
'diff_dtventa_dtventa_before',
'unidades_pend',
'unidades'
]
train = train[predictors]
###Output
_____no_output_____
###Markdown
encode catvars
###Code
le = preprocessing.LabelEncoder()
le.fit(train['canal'].unique())
train['canal'] = le.transform(train['canal'].values)
X, y = train.iloc[:,:-1],train.iloc[:,-1]
data_dmatrix = xgb.DMatrix(data=X,label=y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
## =========================================================================================================
#
# Booster Parameters
#
# n_estimators
# - The number of sequential trees to be modeled
# - Though GBM is fairly robust at higher number of trees but it can still overfit at a point.
#
# max_depth [default=6]
# - The maximum depth of a tree.
# - Used to control over-fitting, as higher depth allows the model to learn relations very specific to a particular sample.
# - Typical values: 3-10
#
# min_child_weight [default=1]
# - Defines the minimum sum of weights of all observations required in a child.
# - This is similar to min_child_leaf in GBM but not exactly.
# This refers to the min sum of weights of observations, while GBM has a min number of observations.
# - Used to control over-fitting. Higher values prevent a model from learning relations
# which might be highly specific to the particular sample selected for a tree.
# - Too high values can lead to under-fitting hence, it should be tuned using CV.
#
# eta : learning rate
# - Makes the model more robust by shrinking the weights on each step
# - Typical final values to be used: 0.01-0.2
#
# gamma [default=0]
#
# - A node is split only when the resulting split gives a positive reduction in the loss function.
# - Gamma specifies the minimum loss reduction required to make a split.
# - Makes the algorithm conservative. The values can vary depending on the loss function and should be tuned.
#
# subsample [default=1]
# - Denotes the fraction of observations to be randomly samples for each tree.
# - Lower values make the algorithm more conservative and prevents overfitting but too small values
# might lead to under-fitting.
# - Typical values: 0.5-1
#
# colsample_bytree [default=1]
# - Denotes the fraction of columns to be randomly samples for each tree.
# - Typical values: 0.5-1
#
# alpha [default=0]
# - L1 regularization term on weight (analogous to Lasso regression)
# - Can be used in case of very high dimensionality so that the algorithm runs faster when implemented
#
# scale_pos_weight [default=1]
# A value greater than 0 should be used in case of high class imbalance as it helps in faster convergence.
#
## ===========================================================================================================
###Output
_____no_output_____
###Markdown
Number of trees (estimators)
###Code
# 5 fold cross validation is more accurate than using a single validation set
cv_folds = 5
early_stopping_rounds = 50
model = xgb.XGBRegressor(seed = SEED)
xgb_param = model.get_xgb_params()
xgb_param
# To train on GPU
xgb_param['objective'] = 'reg:squarederror'
xgb_param['gpu_id'] = 0
xgb_param['max_bin'] = 16
xgb_param['tree_method'] = 'gpu_hist'
xgb_param['learning_rate'] = 0.01
cvresult = xgb.cv(params=xgb_param, dtrain=data_dmatrix, num_boost_round = 1000, nfold = cv_folds, metrics = 'mae', early_stopping_rounds = early_stopping_rounds, seed = SEED)
print(cvresult)
print ("Optimal number of trees (estimators) is %i" % cvresult.shape[0])
model.set_params(objective = 'reg:squarederror')
model.set_params(gpu_id = 0)
model.set_params(max_bin= 16)
model.set_params(learning_rate = 0.01)
model.set_params(n_estimators=1000)
model.set_params(tree_method='gpu_hist')
model.fit(X,y)
feat_imp = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances',figsize=(16,6))
plt.ylabel('Feature Importance Score')
###Output
_____no_output_____
###Markdown
max_depth & min_child_weight
###Code
model.set_params(objective = 'reg:squarederror')
model.set_params(gpu_id = 0)
model.set_params(max_bin= 16)
model.set_params(learning_rate = 0.01)
model.set_params(n_estimators=1000)
model.set_params(tree_method='gpu_hist')
param_test1 = {
'max_depth': [i for i in range(2,8,1)],
'min_child_weight': [i for i in range(1,6,1)]
}
gsearch1 = GridSearchCV(estimator = model, param_grid = param_test1, scoring = 'neg_mean_absolute_error', iid = False, cv = cv_folds, verbose = 1)
res =gsearch1.fit(X_train,y_train)
#print gsearch1.grid_scores_
print(gsearch1.best_params_)
print(gsearch1.best_score_)
###Output
{'max_depth': 7, 'min_child_weight': 4}
-5.576470857893801
###Markdown
gamma
###Code
model.set_params(max_depth = 7)
model.set_params(min_child_weight = 4)
param_test2 = {
'gamma':[i/10.0 for i in range(0,5)]
}
gsearch2 = GridSearchCV(estimator = model, param_grid = param_test2, scoring = 'neg_mean_absolute_error', iid = False, cv = cv_folds, verbose = 1)
gsearch2.fit(X_train,y_train)
print(gsearch2.best_params_)
print(gsearch2.best_score_)
###Output
{'gamma': 0.0}
-5.576470857893801
###Markdown
Recalculate the number of trees (estimators)
###Code
# 5 fold cross validation is more accurate than using a single validation set
cv_folds = 5
early_stopping_rounds = 50
model = xgb.XGBRegressor(seed = SEED)
xgb_param = model.get_xgb_params()
# To train on GPU
xgb_param['objective'] = 'reg:squarederror'
xgb_param['gpu_id'] = 0
xgb_param['max_bin'] = 16
xgb_param['tree_method'] = 'gpu_hist'
xgb_param['learning_rate'] = 0.01
xgb_param['gamma'] = 0.0
xgb_param['max_depth'] = 7
xgb_param['min_child_weight'] = 4
cvresult = xgb.cv(params=xgb_param, dtrain=data_dmatrix, num_boost_round = 1000, nfold = cv_folds, metrics = 'mae', early_stopping_rounds = early_stopping_rounds, seed = SEED)
print(cvresult)
print ("Optimal number of trees (estimators) is %i" % cvresult.shape[0])
###Output
train-mae-mean train-mae-std test-mae-mean test-mae-std
0 16.997697 0.065034 16.997463 0.260372
1 16.828657 0.064343 16.828224 0.257683
2 16.661378 0.063660 16.660743 0.255031
3 16.495951 0.062987 16.495135 0.252511
4 16.332359 0.062283 16.331143 0.249874
5 16.170497 0.061636 16.169159 0.247268
6 16.010664 0.061072 16.009269 0.244700
7 15.852994 0.060512 15.851453 0.242255
8 15.697456 0.059867 15.695982 0.239872
9 15.543846 0.059136 15.542439 0.237438
10 15.392373 0.058290 15.390947 0.235286
11 15.242987 0.057571 15.241646 0.232983
12 15.095462 0.056899 15.094234 0.230979
13 14.949644 0.056326 14.948698 0.228751
14 14.805946 0.055642 14.805302 0.226727
15 14.663996 0.055066 14.663839 0.224737
16 14.523940 0.054538 14.524023 0.222900
17 14.385761 0.054178 14.386519 0.220699
18 14.249345 0.053736 14.250572 0.218634
19 14.114664 0.053347 14.116397 0.216485
20 13.981868 0.053024 13.984136 0.214459
21 13.850954 0.052640 13.853997 0.212449
22 13.721778 0.052352 13.725441 0.210371
23 13.594313 0.051894 13.598682 0.208595
24 13.468476 0.051549 13.473463 0.206780
25 13.344474 0.051230 13.350541 0.204765
26 13.222065 0.050851 13.228910 0.203110
27 13.101368 0.050548 13.109103 0.201288
28 12.982173 0.050319 12.991006 0.199583
29 12.864668 0.050109 12.874616 0.197830
.. ... ... ... ...
621 4.681801 0.020360 5.532985 0.126719
622 4.681024 0.020115 5.532848 0.126629
623 4.679982 0.019967 5.532805 0.126739
624 4.679042 0.019807 5.532768 0.126806
625 4.678223 0.019748 5.532609 0.126895
626 4.677137 0.019752 5.532422 0.126801
627 4.676179 0.019650 5.532407 0.126928
628 4.675625 0.019717 5.532279 0.126834
629 4.674634 0.019665 5.532275 0.127027
630 4.673967 0.019565 5.532234 0.127050
631 4.673494 0.019357 5.532198 0.127061
632 4.672570 0.019390 5.532225 0.127127
633 4.671680 0.019529 5.532197 0.127239
634 4.670933 0.019578 5.532194 0.127044
635 4.670385 0.019757 5.532230 0.127056
636 4.669516 0.019680 5.532260 0.127056
637 4.668797 0.019749 5.532295 0.127139
638 4.668190 0.019822 5.532328 0.127249
639 4.667286 0.019960 5.532208 0.127290
640 4.666221 0.020268 5.532083 0.127102
641 4.665408 0.020025 5.532236 0.127162
642 4.664321 0.020043 5.532257 0.127164
643 4.663308 0.020230 5.532146 0.127013
644 4.662735 0.020177 5.532239 0.126962
645 4.661790 0.020374 5.532118 0.126737
646 4.660932 0.020504 5.532210 0.126780
647 4.660146 0.020440 5.532240 0.126811
648 4.659427 0.020665 5.532195 0.126794
649 4.658489 0.020985 5.532096 0.126587
650 4.657553 0.021085 5.532071 0.126698
[651 rows x 4 columns]
Optimal number of trees (estimators) is 651
###Markdown
- N_estimators = 1000. A better number of estimators was found. subsample & colsample_bytree
###Code
model.set_params(objective = 'reg:squarederror')
model.set_params(gpu_id = 0)
model.set_params(max_bin= 16)
model.set_params(tree_method='gpu_hist')
model.set_params(learning_rate = 0.01)
model.set_params(n_estimators = 651)
model.set_params(max_depth = 7)
model.set_params(min_child_weight = 4)
model.set_params(gamma = 0.0)
param_test3 = {
'subsample' : [i/10.0 for i in range(6,11)],
'colsample_bytree' : [i/10.0 for i in range(6,11)]
}
gsearch3 = GridSearchCV(estimator = model, param_grid = param_test3, scoring = 'neg_mean_absolute_error', iid = False, cv = cv_folds, verbose = 1)
gsearch3.fit(X_train,y_train)
print(gsearch3.best_params_)
print(gsearch3.best_score_)
###Output
{'colsample_bytree': 0.7, 'subsample': 0.9}
-5.533399801184162
###Markdown
shrinking subsample & colsample_bytree
###Code
model.set_params(objective = 'reg:squarederror')
model.set_params(gpu_id = 0)
model.set_params(max_bin= 16)
model.set_params(tree_method='gpu_hist')
model.set_params(learning_rate = 0.01)
model.set_params(n_estimators = 939)
model.set_params(max_depth = 5)
model.set_params(min_child_weight = 5)
model.set_params(gamma = 0.0)
param_test3 = {
'subsample' : [i/100.0 for i in range(95,100,1)],
'colsample_bytree' : [i/100.0 for i in range(65,76,1)]
}
gsearch3 = GridSearchCV(estimator = model, param_grid = param_test3, scoring = 'neg_mean_absolute_error', iid = False, cv = cv_folds, verbose = 1)
gsearch3.fit(X_train,y_train)
print(gsearch3.best_params_)
print(gsearch3.best_score_)
###Output
{'colsample_bytree': 0.66, 'subsample': 0.95}
-7.644157605361481
###Markdown
reg_alpha
###Code
model.set_params(objective = 'reg:squarederror')
model.set_params(gpu_id = 0)
model.set_params(max_bin= 16)
model.set_params(tree_method='gpu_hist')
model.set_params(learning_rate = 0.01)
model.set_params(n_estimators = 651)
model.set_params(max_depth = 7)
model.set_params(min_child_weight = 4)
model.set_params(gamma = 0.0)
model.set_params(colsample_bytree = 0.7)
model.set_params(subsample = 0.9)
param_test4 = {
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gsearch4 = GridSearchCV(estimator = model, param_grid = param_test4, scoring = 'neg_mean_absolute_error', iid = False, cv = cv_folds, verbose = 1)
gsearch4.fit(X_train,y_train)
print(gsearch4.best_params_)
print(gsearch4.best_score_)
###Output
{'reg_alpha': 0.01}
-5.534888525528238
###Markdown
Training and Testing Model
###Code
model = xgb.XGBRegressor(seed = SEED)
model.set_params(objective = 'reg:squarederror')
model.set_params(gpu_id = 0)
model.set_params(max_bin= 16)
model.set_params(tree_method='gpu_hist')
model.set_params(learning_rate = 0.01)
model.set_params(n_estimators = 651)
model.set_params(max_depth = 7)
model.set_params(min_child_weight = 4)
model.set_params(gamma = 0.0)
model.set_params(colsample_bytree = 0.7)
model.set_params(subsample = 0.9)
model.set_params(reg_alpha = 0.01)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("MAE unidades: ",mean_absolute_error(y_test, y_pred))
print("mean unidades pred: ", np.mean(y_pred))
print("median unidades pred: ", np.median(y_pred))
feat_imp = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances',figsize=(16,6))
plt.ylabel('Feature Importance Score')
###Output
_____no_output_____ |
TensorFlow/handson/09_up_and_running_with_tensorflow.ipynb | ###Markdown
**Chapter 9 – Up and running with TensorFlow** _This notebook contains all the sample code and solutions to the exercises in chapter 9._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "tensorflow"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
###Output
_____no_output_____
###Markdown
Creating and running a graph
###Code
import tensorflow as tf
reset_graph()
x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2
f
sess = tf.Session()
sess.run(x.initializer)
sess.run(y.initializer)
result = sess.run(f)
print(result)
sess.close()
with tf.Session() as sess:
x.initializer.run()
y.initializer.run()
result = f.eval()
result
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
result = f.eval()
result
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
init.run()
result = f.eval()
print(result)
sess.close()
result
###Output
_____no_output_____
###Markdown
Managing graphs
###Code
reset_graph()
x1 = tf.Variable(1)
x1.graph is tf.get_default_graph()
graph = tf.Graph()
with graph.as_default():
x2 = tf.Variable(2)
x2.graph is graph
x2.graph is tf.get_default_graph()
w = tf.constant(3)
x = w + 2
y = x + 5
z = x * 3
with tf.Session() as sess:
print(y.eval()) # 10
print(z.eval()) # 15
with tf.Session() as sess:
y_val, z_val = sess.run([y, z])
print(y_val) # 10
print(z_val) # 15
###Output
10
15
###Markdown
Linear Regression Using the Normal Equation
###Code
import numpy as np
from sklearn.datasets import fetch_california_housing
reset_graph()
housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]
X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
with tf.Session() as sess:
theta_value = theta.eval()
theta_value
###Output
_____no_output_____
###Markdown
Compare with pure NumPy
###Code
X = housing_data_plus_bias
y = housing.target.reshape(-1, 1)
theta_numpy = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
print(theta_numpy)
###Output
[[-3.69419202e+01]
[ 4.36693293e-01]
[ 9.43577803e-03]
[-1.07322041e-01]
[ 6.45065694e-01]
[-3.97638942e-06]
[-3.78654265e-03]
[-4.21314378e-01]
[-4.34513755e-01]]
###Markdown
Compare with Scikit-Learn
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing.data, housing.target.reshape(-1, 1))
print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])
###Output
[[-3.69419202e+01]
[ 4.36693293e-01]
[ 9.43577803e-03]
[-1.07322041e-01]
[ 6.45065694e-01]
[-3.97638942e-06]
[-3.78654265e-03]
[-4.21314378e-01]
[-4.34513755e-01]]
###Markdown
Using Batch Gradient Descent Gradient Descent requires scaling the feature vectors first. We could do this using TF, but let's just use Scikit-Learn for now.
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]
print(scaled_housing_data_plus_bias.mean(axis=0))
print(scaled_housing_data_plus_bias.mean(axis=1))
print(scaled_housing_data_plus_bias.mean())
print(scaled_housing_data_plus_bias.shape)
###Output
[ 1.00000000e+00 6.60969987e-17 5.50808322e-18 6.60969987e-17
-1.06030602e-16 -1.10161664e-17 3.44255201e-18 -1.07958431e-15
-8.52651283e-15]
[ 0.38915536 0.36424355 0.5116157 ... -0.06612179 -0.06360587
0.01359031]
0.11111111111111005
(20640, 9)
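As an aside on the remark above that the scaling could also be done with TF: a minimal sketch using TensorFlow ops (illustrative only; it standardizes the raw features, does not add the bias column, and the rest of the chapter keeps using the Scikit-Learn version):

```python
X_raw = tf.constant(housing.data, dtype=tf.float32)
feat_mean, feat_var = tf.nn.moments(X_raw, axes=[0])  # per-feature mean and variance
X_scaled_tf = (X_raw - feat_mean) / tf.sqrt(feat_var)

with tf.Session() as sess:
    scaled_by_tf = X_scaled_tf.eval()
```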
###Markdown
Manually computing the gradients
###Code
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
best_theta
###Output
_____no_output_____
###Markdown
Using autodiff Same as above except for the `gradients = ...` line:
###Code
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = tf.gradients(mse, [theta])[0]
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Epoch 0 MSE = 9.161542
Epoch 100 MSE = 0.7145004
Epoch 200 MSE = 0.56670487
Epoch 300 MSE = 0.55557173
Epoch 400 MSE = 0.5488112
Epoch 500 MSE = 0.5436363
Epoch 600 MSE = 0.53962904
Epoch 700 MSE = 0.5365092
Epoch 800 MSE = 0.53406775
Epoch 900 MSE = 0.5321473
Best theta:
[[ 2.0685525 ]
[ 0.8874027 ]
[ 0.14401658]
[-0.34770882]
[ 0.36178368]
[ 0.00393811]
[-0.04269556]
[-0.6614528 ]
[-0.6375277 ]]
###Markdown
How could you find the partial derivatives of the following function with regards to `a` and `b`?
###Code
def my_func(a, b):
z = 0
for i in range(100):
z = a * np.cos(z + i) + z * np.sin(b - i)
return z
my_func(0.2, 0.3)
reset_graph()
a = tf.Variable(0.2, name="a")
b = tf.Variable(0.3, name="b")
z = tf.constant(0.0, name="z0")
for i in range(100):
z = a * tf.cos(z + i) + z * tf.sin(b - i)
grads = tf.gradients(z, [a, b])
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
Let's compute the function at $a=0.2$ and $b=0.3$, and the partial derivatives at that point with regards to $a$ and with regards to $b$:
###Code
with tf.Session() as sess:
init.run()
print(z.eval())
print(sess.run(grads))
###Output
-0.21253741
[-1.1388495, 0.19671395]
###Markdown
Using a `GradientDescentOptimizer`
###Code
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Epoch 0 MSE = 9.161542
Epoch 100 MSE = 0.7145004
Epoch 200 MSE = 0.56670487
Epoch 300 MSE = 0.55557173
Epoch 400 MSE = 0.5488112
Epoch 500 MSE = 0.5436363
Epoch 600 MSE = 0.53962904
Epoch 700 MSE = 0.5365092
Epoch 800 MSE = 0.53406775
Epoch 900 MSE = 0.5321473
Best theta:
[[ 2.0685525 ]
[ 0.8874027 ]
[ 0.14401658]
[-0.34770882]
[ 0.36178368]
[ 0.00393811]
[-0.04269556]
[-0.6614528 ]
[-0.6375277 ]]
###Markdown
Using a momentum optimizer
###Code
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=0.9)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
###Output
Best theta:
[[ 2.068558 ]
[ 0.8296286 ]
[ 0.11875337]
[-0.26554456]
[ 0.3057109 ]
[-0.00450251]
[-0.03932662]
[-0.89986444]
[-0.87052065]]
###Markdown
Feeding data to the training algorithm Placeholder nodes
###Code
reset_graph()
A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5
with tf.Session() as sess:
B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
print(B_val_1)
print(B_val_2)
###Output
[[ 9. 10. 11.]
[12. 13. 14.]]
###Markdown
Mini-batch Gradient Descent
###Code
n_epochs = 1000
learning_rate = 0.01
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
def fetch_batch(epoch, batch_index, batch_size):
np.random.seed(epoch * n_batches + batch_index) # not shown in the book
indices = np.random.randint(m, size=batch_size) # not shown
X_batch = scaled_housing_data_plus_bias[indices] # not shown
y_batch = housing.target.reshape(-1, 1)[indices] # not shown
return X_batch, y_batch
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
best_theta
###Output
_____no_output_____
###Markdown
Saving and restoring a model
###Code
reset_graph()
n_epochs = 1000 # not shown in the book
learning_rate = 0.01 # not shown
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") # not shown
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") # not shown
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions") # not shown
error = y_pred - y # not shown
mse = tf.reduce_mean(tf.square(error), name="mse") # not shown
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) # not shown
training_op = optimizer.minimize(mse) # not shown
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval()) # not shown
save_path = saver.save(sess, "./saver/my_model.ckpt")
sess.run(training_op)
best_theta = theta.eval()
save_path = saver.save(sess, "./saver/my_model_final.ckpt")
best_theta
with tf.Session() as sess:
saver.restore(sess, "./saver/my_model_final.ckpt")
best_theta_restored = theta.eval() # not shown in the book
np.allclose(best_theta, best_theta_restored)
###Output
_____no_output_____
###Markdown
If you want to have a saver that loads and restores `theta` with a different name, such as `"weights"`:
###Code
saver = tf.train.Saver({"weights": theta})
###Output
_____no_output_____
###Markdown
By default the saver also saves the graph structure itself in a second file with the extension `.meta`. You can use the function `tf.train.import_meta_graph()` to restore the graph structure. This function loads the graph into the default graph and returns a `Saver` that can then be used to restore the graph state (i.e., the variable values):
###Code
reset_graph()
# notice that we start with an empty graph.
saver = tf.train.import_meta_graph("./saver/my_model_final.ckpt.meta") # this loads the graph structure
theta = tf.get_default_graph().get_tensor_by_name("theta:0") # not shown in the book
with tf.Session() as sess:
saver.restore(sess, "./saver/my_model_final.ckpt") # this restores the graph's state
best_theta_restored = theta.eval() # not shown in the book
np.allclose(best_theta, best_theta_restored)
###Output
_____no_output_____
###Markdown
This means that you can import a pretrained model without having to have the corresponding Python code to build the graph. This is very handy when you keep tweaking and saving your model: you can load a previously saved model without having to search for the version of the code that built it. Visualizing the graph inside Jupyter To visualize the graph within Jupyter, we will use a TensorBoard server available online at https://tensorboard.appspot.com/ (so this will not work if you do not have Internet access). As far as I can tell, this code was originally written by Alex Mordvintsev in his [DeepDream tutorial](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb). Alternatively, you could use a tool like [tfgraphviz](https://github.com/akimach/tfgraphviz).
###Code
from tensorflow_graph_in_jupyter import show_graph
show_graph(tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Using TensorBoard
###Code
reset_graph()
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess: # not shown in the book
sess.run(init) # not shown
for epoch in range(n_epochs): # not shown
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
file_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval() # not shown
file_writer.close()
best_theta
###Output
_____no_output_____
###Markdown
Name scopes
###Code
reset_graph()
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
with tf.name_scope("loss") as scope:
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
file_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
file_writer.flush()
file_writer.close()
print("Best theta:")
print(best_theta)
print(error.op.name)
print(mse.op.name)
reset_graph()
a1 = tf.Variable(0, name="a") # name == "a"
a2 = tf.Variable(0, name="a") # name == "a_1"
with tf.name_scope("param"): # name == "param"
a3 = tf.Variable(0, name="a") # name == "param/a"
with tf.name_scope("param"): # name == "param_1"
a4 = tf.Variable(0, name="a") # name == "param_1/a"
for node in (a1, a2, a3, a4):
print(node.op.name)
###Output
a
a_1
param/a
param_1/a
###Markdown
Modularity An ugly flat code:
###Code
reset_graph()
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
w1 = tf.Variable(tf.random_normal((n_features, 1)), name="weights1")
w2 = tf.Variable(tf.random_normal((n_features, 1)), name="weights2")
b1 = tf.Variable(0.0, name="bias1")
b2 = tf.Variable(0.0, name="bias2")
z1 = tf.add(tf.matmul(X, w1), b1, name="z1")
z2 = tf.add(tf.matmul(X, w2), b2, name="z2")
relu1 = tf.maximum(z1, 0., name="relu1")
relu2 = tf.maximum(z1, 0., name="relu2") # Oops, cut&paste error! Did you spot it?
output = tf.add(relu1, relu2, name="output")
###Output
_____no_output_____
###Markdown
Much better, using a function to build the ReLUs:
###Code
reset_graph()
def relu(X):
w_shape = (int(X.get_shape()[1]), 1)
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
z = tf.add(tf.matmul(X, w), b, name="z")
return tf.maximum(z, 0., name="relu")
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu1", tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Even better using name scopes:
###Code
reset_graph()
def relu(X):
with tf.name_scope("relu"):
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, 0., name="max") # not shown
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu2", tf.get_default_graph())
file_writer.close()
###Output
_____no_output_____
###Markdown
Sharing Variables Sharing a `threshold` variable the classic way, by defining it outside of the `relu()` function then passing it as a parameter:
###Code
reset_graph()
def relu(X, threshold):
with tf.name_scope("relu"):
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
threshold = tf.Variable(0.0, name="threshold")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X, threshold) for i in range(5)]
output = tf.add_n(relus, name="output")
reset_graph()
def relu(X):
with tf.name_scope("relu"):
if not hasattr(relu, "threshold"):
relu.threshold = tf.Variable(0.0, name="threshold")
w_shape = int(X.get_shape()[1]), 1 # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, relu.threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
reset_graph()
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold")
with tf.variable_scope("relu") as scope:
scope.reuse_variables()
threshold = tf.get_variable("threshold")
reset_graph()
def relu(X):
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold")
w_shape = int(X.get_shape()[1]), 1 # not shown
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
relus = [relu(X) for relu_index in range(5)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu6", tf.get_default_graph())
file_writer.close()
reset_graph()
def relu(X):
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0))
w_shape = (int(X.get_shape()[1]), 1)
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
z = tf.add(tf.matmul(X, w), b, name="z")
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("", default_name="") as scope:
first_relu = relu(X) # create the shared variable
scope.reuse_variables() # then reuse it
relus = [first_relu] + [relu(X) for i in range(4)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu8", tf.get_default_graph())
file_writer.close()
reset_graph()
def relu(X):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = []
for relu_index in range(5):
with tf.variable_scope("relu", reuse=(relu_index >= 1)) as scope:
relus.append(relu(X))
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu9", tf.get_default_graph())
file_writer.close()
###Output
_____no_output_____
###Markdown
Extra material
###Code
reset_graph()
with tf.variable_scope("my_scope"):
x0 = tf.get_variable("x", shape=(), initializer=tf.constant_initializer(0.))
x1 = tf.Variable(0., name="x")
x2 = tf.Variable(0., name="x")
with tf.variable_scope("my_scope", reuse=True):
x3 = tf.get_variable("x")
x4 = tf.Variable(0., name="x")
with tf.variable_scope("", default_name="", reuse=True):
x5 = tf.get_variable("my_scope/x")
print("x0:", x0.op.name)
print("x1:", x1.op.name)
print("x2:", x2.op.name)
print("x3:", x3.op.name)
print("x4:", x4.op.name)
print("x5:", x5.op.name)
print(x0 is x3 and x3 is x5)
###Output
x0: my_scope/x
x1: my_scope/x_1
x2: my_scope/x_2
x3: my_scope/x
x4: my_scope_1/x
x5: my_scope/x
True
###Markdown
The first `variable_scope()` block first creates the shared variable `x0`, named `my_scope/x`. For all operations other than shared variables (including non-shared variables), the variable scope acts like a regular name scope, which is why the two variables `x1` and `x2` have a name with a prefix `my_scope/`. Note however that TensorFlow makes their names unique by adding an index: `my_scope/x_1` and `my_scope/x_2`.The second `variable_scope()` block reuses the shared variables in scope `my_scope`, which is why `x0 is x3`. Once again, for all operations other than shared variables it acts as a named scope, and since it's a separate block from the first one, the name of the scope is made unique by TensorFlow (`my_scope_1`) and thus the variable `x4` is named `my_scope_1/x`.The third block shows another way to get a handle on the shared variable `my_scope/x` by creating a `variable_scope()` at the root scope (whose name is an empty string), then calling `get_variable()` with the full name of the shared variable (i.e. `"my_scope/x"`). Strings
###Code
reset_graph()
text = np.array("Do you want some café?".split())
text_tensor = tf.constant(text)
with tf.Session() as sess:
print(text_tensor.eval())
###Output
[b'Do' b'you' b'want' b'some' b'caf\xc3\xa9?']
###Markdown
Autodiff Note: the autodiff content was moved to the [extra_autodiff.ipynb](extra_autodiff.ipynb) notebook. Exercise solutions 1. to 11. See appendix A. 12. Logistic Regression with Mini-Batch Gradient Descent using TensorFlow First, let's create the moons dataset using Scikit-Learn's `make_moons()` function:
###Code
from sklearn.datasets import make_moons
m = 1000
X_moons, y_moons = make_moons(m, noise=0.1, random_state=42)
###Output
_____no_output_____
###Markdown
Let's take a peek at the dataset:
###Code
plt.plot(X_moons[y_moons == 1, 0], X_moons[y_moons == 1, 1], 'go', label="Positive")
plt.plot(X_moons[y_moons == 0, 0], X_moons[y_moons == 0, 1], 'r^', label="Negative")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We must not forget to add an extra bias feature ($x_0 = 1$) to every instance. For this, we just need to add a column full of 1s on the left of the input matrix $\mathbf{X}$:
###Code
X_moons_with_bias = np.c_[np.ones((m, 1)), X_moons]
###Output
_____no_output_____
###Markdown
Let's check:
###Code
X_moons_with_bias[:5]
###Output
_____no_output_____
###Markdown
Looks good. Now let's reshape `y_train` to make it a column vector (i.e. a 2D array with a single column):
###Code
y_moons_column_vector = y_moons.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Now let's split the data into a training set and a test set:
###Code
test_ratio = 0.2
test_size = int(m * test_ratio)
X_train = X_moons_with_bias[:-test_size]
X_test = X_moons_with_bias[-test_size:]
y_train = y_moons_column_vector[:-test_size]
y_test = y_moons_column_vector[-test_size:]
###Output
_____no_output_____
###Markdown
Ok, now let's create a small function to generate training batches. In this implementation we will just pick random instances from the training set for each batch. This means that a single batch may contain the same instance multiple times, and also a single epoch may not cover all the training instances (in fact it will generally cover only about two thirds of the instances). However, in practice this is not an issue and it simplifies the code:
###Code
def random_batch(X_train, y_train, batch_size):
rnd_indices = np.random.randint(0, len(X_train), batch_size)
X_batch = X_train[rnd_indices]
y_batch = y_train[rnd_indices]
return X_batch, y_batch
###Output
_____no_output_____
###Markdown
Let's look at a small batch:
###Code
X_batch, y_batch = random_batch(X_train, y_train, 5)
X_batch
y_batch
###Output
_____no_output_____
###Markdown
Great! Now that the data is ready to be fed to the model, we need to build that model. Let's start with a simple implementation, then we will add all the bells and whistles. First let's reset the default graph.
###Code
reset_graph()
###Output
_____no_output_____
###Markdown
The _moons_ dataset has two input features, since each instance is a point on a plane (i.e., 2-Dimensional):
###Code
n_inputs = 2
###Output
_____no_output_____
###Markdown
Now let's build the Logistic Regression model. As we saw in chapter 4, this model first computes a weighted sum of the inputs (just like the Linear Regression model), and then it applies the sigmoid function to the result, which gives us the estimated probability for the positive class:$\hat{p} = h_\boldsymbol{\theta}(\mathbf{x}) = \sigma(\boldsymbol{\theta}^T \mathbf{x})$ Recall that $\boldsymbol{\theta}$ is the parameter vector, containing the bias term $\theta_0$ and the weights $\theta_1, \theta_2, \dots, \theta_n$. The input vector $\mathbf{x}$ contains a constant term $x_0 = 1$, as well as all the input features $x_1, x_2, \dots, x_n$.Since we want to be able to make predictions for multiple instances at a time, we will use an input matrix $\mathbf{X}$ rather than a single input vector. The $i^{th}$ row will contain the transpose of the $i^{th}$ input vector $(\mathbf{x}^{(i)})^T$. It is then possible to estimate the probability that each instance belongs to the positive class using the following equation:$ \hat{\mathbf{p}} = \sigma(\mathbf{X} \boldsymbol{\theta})$That's all we need to build the model:
###Code
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n_inputs + 1, 1], -1.0, 1.0, seed=42), name="theta")
logits = tf.matmul(X, theta, name="logits")
y_proba = 1 / (1 + tf.exp(-logits))
###Output
_____no_output_____
###Markdown
In fact, TensorFlow has a nice function `tf.sigmoid()` that we can use to simplify the last line of the previous code:
###Code
y_proba = tf.sigmoid(logits)
###Output
_____no_output_____
###Markdown
As we saw in chapter 4, the log loss is a good cost function to use for Logistic Regression:$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} \log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) \log\left(1 - \hat{p}^{(i)}\right)\right]}$One option is to implement it ourselves:
###Code
epsilon = 1e-7 # to avoid an overflow when computing the log
loss = -tf.reduce_mean(y * tf.log(y_proba + epsilon) + (1 - y) * tf.log(1 - y_proba + epsilon))
###Output
_____no_output_____
###Markdown
But we might as well use TensorFlow's `tf.losses.log_loss()` function:
###Code
loss = tf.losses.log_loss(y, y_proba) # uses epsilon = 1e-7 by default
###Output
_____no_output_____
###Markdown
The rest is pretty standard: let's create the optimizer and tell it to minimize the cost function:
###Code
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
###Output
_____no_output_____
###Markdown
All we need now (in this minimal version) is the variable initializer:
###Code
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
And we are ready to train the model and use it for predictions! There's really nothing special about this code, it's virtually the same as the one we used earlier for Linear Regression:
###Code
n_epochs = 1000
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = random_batch(X_train, y_train, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
loss_val = loss.eval({X: X_test, y: y_test})
if epoch % 100 == 0:
print("Epoch:", epoch, "\tLoss:", loss_val)
y_proba_val = y_proba.eval(feed_dict={X: X_test, y: y_test})
###Output
Epoch: 0 Loss: 0.792602
Epoch: 100 Loss: 0.343463
Epoch: 200 Loss: 0.30754
Epoch: 300 Loss: 0.292889
Epoch: 400 Loss: 0.285336
Epoch: 500 Loss: 0.280478
Epoch: 600 Loss: 0.278083
Epoch: 700 Loss: 0.276154
Epoch: 800 Loss: 0.27552
Epoch: 900 Loss: 0.274912
###Markdown
Note: we don't use the epoch number when generating batches, so we could just have a single `for` loop rather than 2 nested `for` loops, but it's convenient to think of training time in terms of number of epochs (i.e., roughly the number of times the algorithm went through the training set). For each instance in the test set, `y_proba_val` contains the estimated probability that it belongs to the positive class, according to the model. For example, here are the first 5 estimated probabilities:
###Code
y_proba_val[:5]
###Output
_____no_output_____
###Markdown
To classify each instance, we can go for maximum likelihood: classify as positive any instance whose estimated probability is greater or equal to 0.5:
###Code
y_pred = (y_proba_val >= 0.5)
y_pred[:5]
###Output
_____no_output_____
###Markdown
Depending on the use case, you may want to choose a different threshold than 0.5: make it higher if you want high precision (but lower recall), and make it lower if you want high recall (but lower precision). See chapter 3 for more details. Let's compute the model's precision and recall:
###Code
from sklearn.metrics import precision_score, recall_score
precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
###Output
_____no_output_____
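###Markdown
As a quick, hedged illustration of this trade-off, the snippet below recomputes precision and recall for a few arbitrary thresholds (0.3 and 0.7 are not part of the original exercise), reusing `y_proba_val` and `y_test` from above:
###Code
# Sketch: precision/recall at a few arbitrary probability thresholds
for threshold in (0.3, 0.5, 0.7):
    y_pred_t = (y_proba_val >= threshold)
    print("threshold =", threshold,
          "precision =", precision_score(y_test, y_pred_t),
          "recall =", recall_score(y_test, y_pred_t))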
###Markdown
Let's plot these predictions to see what they look like:
###Code
y_pred_idx = y_pred.reshape(-1) # a 1D array rather than a column vector
plt.plot(X_test[y_pred_idx, 1], X_test[y_pred_idx, 2], 'go', label="Positive")
plt.plot(X_test[~y_pred_idx, 1], X_test[~y_pred_idx, 2], 'r^', label="Negative")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Well, that looks pretty bad, doesn't it? But let's not forget that the Logistic Regression model has a linear decision boundary, so this is actually close to the best we can do with this model (unless we add more features, as we will show in a second).

Now let's start over, but this time we will add all the bells and whistles, as listed in the exercise:
* Define the graph within a `logistic_regression()` function that can be reused easily.
* Save checkpoints using a `Saver` at regular intervals during training, and save the final model at the end of training.
* Restore the last checkpoint upon startup if training was interrupted.
* Define the graph using nice scopes so the graph looks good in TensorBoard.
* Add summaries to visualize the learning curves in TensorBoard.
* Try tweaking some hyperparameters such as the learning rate or the mini-batch size and look at the shape of the learning curve.

Before we start, we will add 4 more features to the inputs: ${x_1}^2$, ${x_2}^2$, ${x_1}^3$ and ${x_2}^3$. This was not part of the exercise, but it will demonstrate how adding features can improve the model. We will do this manually, but you could also add them using `sklearn.preprocessing.PolynomialFeatures`.
###Code
X_train_enhanced = np.c_[X_train,
np.square(X_train[:, 1]),
np.square(X_train[:, 2]),
X_train[:, 1] ** 3,
X_train[:, 2] ** 3]
X_test_enhanced = np.c_[X_test,
np.square(X_test[:, 1]),
np.square(X_test[:, 2]),
X_test[:, 1] ** 3,
X_test[:, 2] ** 3]
###Output
_____no_output_____
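###Markdown
For reference, here is a minimal sketch of the `PolynomialFeatures` alternative mentioned above. Note that, unlike the manual version, degree-3 polynomial features also include the interaction terms (${x_1}{x_2}$, ${x_1}^2{x_2}$, ...), so the resulting matrix has more columns; this cell is illustrative only and is not used below:
###Code
# Illustration only: build polynomial features with Scikit-Learn instead of manually
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=3, include_bias=False)
X_train_poly = np.c_[np.ones((len(X_train), 1)),           # bias column
                     poly.fit_transform(X_train[:, 1:])]   # polynomial terms of x1 and x2
X_train_poly.shape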
###Markdown
This is what the "enhanced" training set looks like:
###Code
X_train_enhanced[:5]
###Output
_____no_output_____
###Markdown
Ok, next let's reset the default graph:
###Code
reset_graph()
###Output
_____no_output_____
###Markdown
Now let's define the `logistic_regression()` function to create the graph. We will leave out the definition of the inputs `X` and the targets `y`. We could include them here, but leaving them out will make it easier to use this function in a wide range of use cases (e.g. perhaps we will want to add some preprocessing steps for the inputs before we feed them to the Logistic Regression model).
###Code
def logistic_regression(X, y, initializer=None, seed=42, learning_rate=0.01):
n_inputs_including_bias = int(X.get_shape()[1])
with tf.name_scope("logistic_regression"):
with tf.name_scope("model"):
if initializer is None:
initializer = tf.random_uniform([n_inputs_including_bias, 1], -1.0, 1.0, seed=seed)
theta = tf.Variable(initializer, name="theta")
logits = tf.matmul(X, theta, name="logits")
y_proba = tf.sigmoid(logits)
with tf.name_scope("train"):
loss = tf.losses.log_loss(y, y_proba, scope="loss")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
loss_summary = tf.summary.scalar('log_loss', loss)
with tf.name_scope("init"):
init = tf.global_variables_initializer()
with tf.name_scope("save"):
saver = tf.train.Saver()
return y_proba, loss, training_op, loss_summary, init, saver
###Output
_____no_output_____
###Markdown
Let's create a little function to get the name of the log directory to save the summaries for Tensorboard:
###Code
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
###Output
_____no_output_____
###Markdown
Next, let's create the graph, using the `logistic_regression()` function. We will also create the `FileWriter` to save the summaries to the log directory for Tensorboard:
###Code
n_inputs = 2 + 4
logdir = log_dir("logreg")
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
y_proba, loss, training_op, loss_summary, init, saver = logistic_regression(X, y)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
###Output
_____no_output_____
###Markdown
At last we can train the model! We will start by checking whether a previous training session was interrupted, and if so we will load the checkpoint and continue training from the epoch number we saved. In this example we just save the epoch number to a separate file, but in chapter 11 we will see how to store the training step directly as part of the model, using a non-trainable variable called `global_step` that we pass to the optimizer's `minimize()` method.

You can try interrupting training to verify that it does indeed restore the last checkpoint when you start it again.
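###Markdown
For reference, here is a minimal sketch of the `global_step` pattern mentioned above (chapter 11 covers it properly). It is defined as a helper and deliberately not called here, so it does not interfere with the graph built earlier or with the file-based epoch counter used below:
###Code
# Sketch only (not used below): let the optimizer increment a step counter stored in the graph
def make_training_op_with_global_step(loss_tensor, learning_rate=0.01):
    global_step = tf.Variable(0, trainable=False, name="global_step")
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    training_op = optimizer.minimize(loss_tensor, global_step=global_step)
    return training_op, global_step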
###Code
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_logreg_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_logreg_model"
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = random_batch(X_train_enhanced, y_train, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
loss_val, summary_str = sess.run([loss, loss_summary], feed_dict={X: X_test_enhanced, y: y_test})
file_writer.add_summary(summary_str, epoch)
if epoch % 500 == 0:
print("Epoch:", epoch, "\tLoss:", loss_val)
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
saver.save(sess, final_model_path)
y_proba_val = y_proba.eval(feed_dict={X: X_test_enhanced, y: y_test})
os.remove(checkpoint_epoch_path)
###Output
Epoch: 0 Loss: 0.629985
Epoch: 500 Loss: 0.161224
Epoch: 1000 Loss: 0.119032
Epoch: 1500 Loss: 0.0973292
Epoch: 2000 Loss: 0.0836979
Epoch: 2500 Loss: 0.0743758
Epoch: 3000 Loss: 0.0675021
Epoch: 3500 Loss: 0.0622069
Epoch: 4000 Loss: 0.0580268
Epoch: 4500 Loss: 0.054563
Epoch: 5000 Loss: 0.0517083
Epoch: 5500 Loss: 0.0492377
Epoch: 6000 Loss: 0.0471673
Epoch: 6500 Loss: 0.0453766
Epoch: 7000 Loss: 0.0438187
Epoch: 7500 Loss: 0.0423742
Epoch: 8000 Loss: 0.0410892
Epoch: 8500 Loss: 0.0399709
Epoch: 9000 Loss: 0.0389202
Epoch: 9500 Loss: 0.0380107
Epoch: 10000 Loss: 0.0371557
###Markdown
Once again, we can make predictions by just classifying as positive all the instances whose estimated probability is greater or equal to 0.5:
###Code
y_pred = (y_proba_val >= 0.5)
precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
y_pred_idx = y_pred.reshape(-1) # a 1D array rather than a column vector
plt.plot(X_test[y_pred_idx, 1], X_test[y_pred_idx, 2], 'go', label="Positive")
plt.plot(X_test[~y_pred_idx, 1], X_test[~y_pred_idx, 2], 'r^', label="Negative")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now that's much, much better! Apparently the new features really helped a lot. Try starting the tensorboard server, find the latest run and look at the learning curve (i.e., how the loss evaluated on the test set evolves as a function of the epoch number):

```
$ tensorboard --logdir=tf_logs
```

Now you can play around with the hyperparameters (e.g. the `batch_size` or the `learning_rate`) and run training again and again, comparing the learning curves. You can even automate this process by implementing grid search or randomized search. Below is a simple implementation of a randomized search on both the batch size and the learning rate. For the sake of simplicity, the checkpoint mechanism was removed.
###Code
from scipy.stats import reciprocal
n_search_iterations = 10
for search_iteration in range(n_search_iterations):
batch_size = np.random.randint(1, 100)
learning_rate = reciprocal(0.0001, 0.1).rvs(random_state=search_iteration)
n_inputs = 2 + 4
logdir = log_dir("logreg")
print("Iteration", search_iteration)
print(" logdir:", logdir)
print(" batch size:", batch_size)
print(" learning_rate:", learning_rate)
print(" training: ", end="")
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
y_proba, loss, training_op, loss_summary, init, saver = logistic_regression(
X, y, learning_rate=learning_rate)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10001
n_batches = int(np.ceil(m / batch_size))
final_model_path = "./my_logreg_model_%d" % search_iteration
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = random_batch(X_train_enhanced, y_train, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
loss_val, summary_str = sess.run([loss, loss_summary], feed_dict={X: X_test_enhanced, y: y_test})
file_writer.add_summary(summary_str, epoch)
if epoch % 500 == 0:
print(".", end="")
saver.save(sess, final_model_path)
print()
y_proba_val = y_proba.eval(feed_dict={X: X_test_enhanced, y: y_test})
y_pred = (y_proba_val >= 0.5)
print(" precision:", precision_score(y_test, y_pred))
print(" recall:", recall_score(y_test, y_pred))
###Output
Iteration 0
logdir: tf_logs/logreg-run-20170606195328/
batch size: 19
learning_rate: 0.00443037524522
training: .....................
precision: 0.979797979798
recall: 0.979797979798
Iteration 1
logdir: tf_logs/logreg-run-20170606195605/
batch size: 80
learning_rate: 0.00178264971514
training: .....................
precision: 0.969696969697
recall: 0.969696969697
Iteration 2
logdir: tf_logs/logreg-run-20170606195646/
batch size: 73
learning_rate: 0.00203228544324
training: .....................
precision: 0.969696969697
recall: 0.969696969697
Iteration 3
logdir: tf_logs/logreg-run-20170606195730/
batch size: 6
learning_rate: 0.00449152382514
training: .....................
precision: 0.980198019802
recall: 1.0
Iteration 4
logdir: tf_logs/logreg-run-20170606200523/
batch size: 24
learning_rate: 0.0796323472178
training: .....................
precision: 0.980198019802
recall: 1.0
Iteration 5
logdir: tf_logs/logreg-run-20170606200726/
batch size: 75
learning_rate: 0.000463425058329
training: .....................
precision: 0.912621359223
recall: 0.949494949495
Iteration 6
logdir: tf_logs/logreg-run-20170606200810/
batch size: 86
learning_rate: 0.0477068184194
training: .....................
precision: 0.98
recall: 0.989898989899
Iteration 7
logdir: tf_logs/logreg-run-20170606200851/
batch size: 87
learning_rate: 0.000169404470952
training: .....................
precision: 0.888888888889
recall: 0.808080808081
Iteration 8
logdir: tf_logs/logreg-run-20170606200932/
batch size: 61
learning_rate: 0.0417146119941
training: .....................
precision: 0.980198019802
recall: 1.0
Iteration 9
logdir: tf_logs/logreg-run-20170606201026/
batch size: 92
learning_rate: 0.000107429229684
training: .....................
precision: 0.882352941176
recall: 0.757575757576
|
ann_keras_mxnet.ipynb | ###Markdown
Run Same ANN on mxnet
Compiled to use GPU on my GTX 770 CUDA 3.0 machine.
Remember to write to ~/.keras/keras.json as below:

```json
{
    "backend": "mxnet",
    "image_data_format": "channels_first"
}
```
###Code
# if not using tensorflow, import keras
import os
import numpy as np
import keras
#from tensorflow import keras
print('keras version:', keras.__version__, ', keras backend:', keras.backend.backend(), ', image format:', keras.backend.image_data_format())
# this is the magic that will display matplotlib on the jupyter notebook
%matplotlib inline
from matplotlib import pyplot as plt
# flag to keep track of how we will change the format
is_channels_first = (keras.backend.image_data_format() == 'channels_first')
# get mnist data
mnist = keras.datasets.mnist
print('loading MNIST data...')
# loads the data only if for the first time
(x_train, y_train),(x_test, y_test) = mnist.load_data()
#show the "shape" of downloaded data
print('train data size:', x_train.shape)
print('train label (expected) value size:', y_train.shape)
print('test data size:', x_test.shape)
print('test expected value:',y_test.shape)
print('\n\ndisplaying few training samples')
#function to copy 1 mage to larger image map
def copy_image(target , ty, tx, src):
for y in range(28):
for x in range(28):
target[ty*28+y][tx*28+x] = src[y][x]
return target
# show 20 x 20
ysize = 20
xsize = 20
start_offset = 0
base_index = start_offset + (ysize * xsize)
image = np.zeros((28*ysize, 28*xsize), dtype=np.int)
for y in range(ysize):
for x in range(xsize):
index = y*xsize + x
src = x_train[index + base_index]
image = copy_image(image , y ,x , src)
%matplotlib inline
from matplotlib import pyplot as plt
plt.figure(figsize=(7,7))
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(image , cmap='gray_r')
plt.show()
plt.close()
###Output
displaying few training samples
###Markdown
The Artificial Neural Network will implement 1 input layer, 1 hidden layer, and 1 output layer. Since the data comes as 28x28 images, we will convert each one into a "flat" 28x28 => 784 array and convert the 0-255 integers into 0.0 - 1.0 values (i.e. normalize the data). We will also one-hot encode each single label into a 10-digit array.
###Code
x_train_reshaped = x_train.reshape(x_train.shape[0],784)
x_test_reshaped = x_test.reshape(x_test.shape[0], 784)
x_train_reshaped = x_train_reshaped.astype('float32')
x_test_reshaped = x_test_reshaped.astype('float32')
x_train_reshaped /= 255.0
x_test_reshaped /= 255.0
y_hot_train = keras.utils.to_categorical(y_train, num_classes=10)
y_hot_test = keras.utils.to_categorical(y_test, num_classes=10)
print('x_train_reshaped:', x_train_reshaped.shape)
print('x_test_reshaped:', x_test_reshaped.shape)
print('y_hot_train:', y_hot_train.shape)
print('y_hot_test:', y_hot_test.shape)
###Output
x_train_reshaped: (60000, 784)
x_test_reshaped: (10000, 784)
y_hot_train: (60000, 10)
y_hot_test: (10000, 10)
###Markdown
Below sets up the ANN in the Keras way and makes it ready for use. The input layer is implied => 784 inputs. The hidden layer has 512 nodes. The final output layer has 10 nodes, where each represents the probability of one digit. Let's "run" the model against the training data. The model.fit() method trains the model. The "optimizer" defines the strategy used to train or tweak the weights and biases between the nodes. The loss defines what simple loss function will be used to determine how "well" your model is behaving. The output indicates that after the training runs (epochs), you get about 98% accuracy on the validation data - the data set that is NOT being used to train.
###Code
model = keras.models.Sequential()
# input layer is just 784 inputs coming in as defined in the hidden layer below
# hidden layer
model.add( keras.layers.Dense(512, input_shape=(784,), activation='relu'))
#output layer
model.add( keras.layers.Dense(10, activation='softmax'))
# compile to model
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
#train the model with train data
fit_history = model.fit(x_train_reshaped, y_hot_train,
epochs=25 ,
batch_size=200,
validation_data=(x_test_reshaped,y_hot_test)
)
# show procession of training...
plt.plot(fit_history.history['loss'])
plt.plot(fit_history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(fit_history.history['acc'])
plt.plot(fit_history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
Let's see how the model worked against the "training" data set and the "test" data set. The loss graph shows how well the "loss" is being minimized on the training set, while the validation loss is the average "loss" over the 10,000 validation samples. The above diagram indicates that we are "over-fitting" => the model's accuracy on the training set improves, but against the test set it doesn't improve at all. In fact, if you look at the train accuracy at the last epoch run, it is 100% !! Furthermore, the "loss" for the validation slowly creeps up while the "loss" for training goes down consistently. This is another indication of "over-fitting". A model that has over-fit on the training data will not perform well against unseen data. We should try to remedy this. One of the available "tricks" is to introduce a "Dropout" layer, which randomly zeros out a percentage of the connections between the layers. Let's train the model and see how well it performs. We have to train a bit longer.
###Code
model2 = keras.models.Sequential()
# input layer is just 784 inputs coming in as defined in the hidden layer below
# hidden layer
model2.add( keras.layers.Dense(512, input_shape=(784,), activation='relu'))
model2.add( keras.layers.Dropout(rate=0.5))
#output layer
model2.add( keras.layers.Dense(10, activation='softmax'))
# compile to model
model2.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model2.summary()
#train the model with train data
fit_history2 = model2.fit(x_train_reshaped, y_hot_train,
epochs=35 ,
batch_size=200,
validation_data=(x_test_reshaped,y_hot_test)
)
# show procession of training...
plt.plot(fit_history2.history['loss'])
plt.plot(fit_history2.history['val_loss'])
plt.title('model2 loss ')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(fit_history2.history['acc'])
plt.plot(fit_history2.history['val_acc'])
plt.title('model2 accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
The model still overfits a little, but it is A LOT better than when no Dropout was used. The validation accuracy is about 0.984x to 0.985x and does not improve further. Let's plot what is called a confusion matrix, which shows the mismatched predictions.
###Code
# predict for my test data
predictions = model2.predict(x_test_reshaped)
my_matrix = np.zeros( (10,10), dtype='int')
# count of good guesses
count_matrix = np.zeros( (10,), dtype='int')
good_matrix = np.zeros( (10,), dtype='int')
# iterate through 10,000 test data
for i in range(10000):
count_matrix[y_test[i]] +=1
guess = np.argmax(predictions[i])
if guess == y_test[i]:
good_matrix[guess] +=1
else:
# increment [expected][guess] matrix
my_matrix[y_test[i]][guess] += 1
# show good matrix
print('Good guesses:')
for i in range(10):
percent = "( {:.2f}".format((good_matrix[i] * 100.0) / count_matrix[i]) + " %)"
print('match count for:',i,'=', good_matrix[i] , '/',count_matrix[i] , percent)
print('\nConfusion Matrix')
fig = plt.figure()
plt.xticks( range(10))
plt.yticks( range(10))
for y in range(10):
for x in range(10):
if my_matrix[y][x] != 0:
# put text
plt.text( x-len(str(x)) * 0.2, y+0.1, str(my_matrix[y][x]))
plt.xlabel('prediction')
plt.ylabel('expected')
plt.imshow(my_matrix, cmap='YlOrRd')
plt.colorbar()
plt.show()
plt.close()
###Output
Confusion Matrix
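###Markdown
As an optional cross-check (assuming scikit-learn is installed), the same table can be computed with `sklearn.metrics.confusion_matrix`; its version also counts the correct guesses on the diagonal:
###Code
# Optional cross-check of the hand-built confusion matrix
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, np.argmax(predictions, axis=1)))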
###Markdown
The matrix shows that the number 4s are confused as 9s. Let's just plot these problematic 4s. You can change the two numbers and click "Run" to view the other "confused" predictions.
###Code
non_match_list = []
for i in range(10000):
if y_test[i] == 4:
guess = np.argmax(predictions[i])
if guess == 9:
non_match_list.append(i)
fig = plt.figure( figsize = (10,2))
for i in range(len(non_match_list)):
plt.subplot(1,20,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
index = non_match_list[i]
plt.imshow(x_test[index], cmap='gray_r')
plt.show()
plt.close()
###Output
_____no_output_____ |
Regression/Support Vector Machine/SVR_RobustScaler_PowerTransformer.ipynb | ###Markdown
Support Vector Regression with RobustScaler and PowerTransformer
This code template is for regression analysis using a simple Support Vector Regressor (SVR) based on the Support Vector Machine algorithm, with the feature rescaling technique RobustScaler and the feature transformation technique PowerTransformer in a pipeline.
Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import seaborn as se
from sklearn.svm import SVR
from sklearn.preprocessing import PowerTransformer, RobustScaler
from sklearn.pipeline import make_pipeline
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Initialization
Filepath of CSV file
###Code
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
features =[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
target=''
###Output
_____no_output_____
###Markdown
Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file using its storage path, and we use the head function to display the initial rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature Selection
It is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string category data and null values, we have to explicitly remove or replace null values. The snippet below has functions that remove null values if any exist, and that convert the string class data in the dataset by encoding it into integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
Data Rescaling
Robust Scaler
Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results.
The `Robust Scaler` removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).
Feature Transformation
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
[More on PowerTransformer module and parameters](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
Model
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.
A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies new cases based on that hyperplane. In two-dimensional space, this hyperplane is a line separating the plane into two segments, with each class or group on either side.
Here we will use SVR; the SVR implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples.
Model Tuning Parameters
1. C : float, default=1.0
> Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.
2. kernel : {‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’
> Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
3. gamma : {‘scale’, ‘auto’} or float, default=’scale’
> Gamma is a hyperparameter that we have to set before training the model. Gamma decides how much curvature we want in the decision boundary.
4. degree : int, default=3
> Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. Using degree 1 is similar to using a linear kernel. Also, increasing the degree parameter leads to higher training times.
###Code
model=make_pipeline(RobustScaler(),PowerTransformer(),SVR())
model.fit(x_train,y_train)
###Output
_____no_output_____
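###Markdown
If the default settings underperform, the tuning parameters listed above can be passed directly to `SVR` inside the same pipeline. A sketch with arbitrary, untuned values (shown for illustration and not fitted here):
###Code
# Example only: same pipeline with explicit (untuned) SVR hyperparameters
tuned_model = make_pipeline(RobustScaler(),
                            PowerTransformer(),
                            SVR(C=10.0, kernel='rbf', gamma='scale'))
# tuned_model.fit(x_train, y_train)  # uncomment to train with these settings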
###Markdown
Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 51.30 %
###Markdown
> **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the total amount of error (the average absolute distance between the real data and the predicted data) of our model.
> **mse**: The **mean squared error** function squares the errors (penalizing the model for large errors) of our model.
###Code
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 51.30 %
Mean Absolute Error 24.54
Mean Squared Error 1029.84
###Markdown
Prediction Plot
First, we plot the actual observations of the target for the first 20 test records, then we overlay the model's predictions for the same records so the two can be compared.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
Compression.ipynb | ###Markdown
Clickhouse Compression Comparison
For this test, I tried several compression codecs and precision reduction methods on 4 MERRA-2 nodes with 9 variables and 4 years of data. For spatial-temporal data in particular, there may be better ways to store it and compress it than a tabular database - see zfp compression and zarr files. But I wanted to check out clickhouse.
Conclusions
**The most important source of size reduction is eliminating false precision (but storage gains are probably not worth the risk of data loss)**. The next thing to do is to use the Gorilla XOR codec, unless the data is a simple counter, like a timestamp, in which case use DoubleDelta.
This produced a dataset only 30% the size of the uncompressed data. Snappy compressed parquet was 37%, for comparison.
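###Markdown
For reference, this is roughly how such per-column codecs are declared in ClickHouse DDL. The table and column names below are hypothetical and only illustrate the conclusion above (Gorilla for float values, DoubleDelta for counter-like timestamps); they are not the tables used in this test:
###Code
# Illustration only: hypothetical ClickHouse DDL kept as a string
example_ddl = """
CREATE TABLE merra_example (
    time DateTime CODEC(DoubleDelta, LZ4),  -- counter-like column
    lat  Float32  CODEC(Gorilla),
    lon  Float32  CODEC(Gorilla),
    t2m  Float32  CODEC(Gorilla)            -- one of the data variables
) ENGINE = MergeTree()
ORDER BY (lat, lon, time);
"""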
###Code
import pandas as pd
from pathlib import Path
import numpy as np
import altair as alt
df = pd.read_csv('./clickhouse_compression_test.tsv', sep='\t')
df[['precision', 'codec_']] = df['table'].str.split('_', expand=True)[[0,1]]
df.head()
base = alt.Chart(df).properties(
width=90,
height=400,
)
ticks = base.mark_tick(thickness=4, size=25).encode(
x='precision:N',
y='compressed:Q',
#row='codec_:N',
color='codec:N',
#column='name:N'
)
reference = base.mark_rule(color='black', strokeDash=[3,5]).encode(
y='uncompressed:Q',
)
alt.layer(
ticks,
reference,
).facet(
'name:N',
)
###Output
_____no_output_____
###Markdown
Zoom in on indices with log scale
###Code
alt.Chart(df).mark_tick(thickness=4, size=25).encode(
x='precision:N',
y=alt.Y('compressed:Q', scale=alt.Scale(type='log', base=2)),
#row='codec_:N',
color='codec:N',
column='name:N',
).transform_filter(
alt.FieldOneOfPredicate(field='name', oneOf=['lat', 'lon', 'time'])
).properties(
width=100,
height=500,
)
###Output
_____no_output_____
###Markdown
Minimum Compressed size [MB]
###Code
min_size = df.groupby(by='name')['compressed'].min().sum() / 2**20
min_size
uncompressed_size = df.groupby(by='name')['uncompressed'].first().sum() / 2**20
uncompressed_size
min_size / uncompressed_size
###Output
_____no_output_____
###Markdown
Gorilla only
###Code
g = df.loc[df['codec_'] == 'gorilla'].groupby(by=['precision'])[['compressed', 'uncompressed']].sum()
g['ratio'] = g['compressed'] / g['uncompressed']
g
###Output
_____no_output_____
###Markdown
Gorilla with DoubleDelta timestamps
###Code
gdd = df.query('name != "time" & codec_ == "gorilla"').groupby(by='precision')[['compressed', 'uncompressed']].sum() + df.query('name == "time" & codec_ == "dd"').groupby(by='precision')[['compressed', 'uncompressed']].sum()
gdd
df.query('name == "time" & codec_ == "dd"').groupby(by='precision')[['compressed', 'uncompressed']].sum()
gdd['ratio'] = gdd['compressed'] / gdd['uncompressed']
gdd
###Output
_____no_output_____
###Markdown
Snappy Parquet for comparison
###Code
root = Path('/mnt/c/data/merra_texas/')
root.exists()
parq = dict(fp16=None, round=None, full32=None)
for key in parq.keys():
fsize = 0
for file in (root / key).glob('*.parquet'):
fsize += file.stat().st_size
parq[key] = fsize
pq = pd.DataFrame(parq, index=[0]).T
pq['ratio'] = pq[0] / g.loc['fp16', 'uncompressed']
pq
###Output
_____no_output_____ |
ipynb/Iceland.ipynb | ###Markdown
Iceland
* Homepage of project: https://oscovida.github.io
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Iceland.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Iceland");
# load the data
cases, deaths, region_label = get_country_data("Iceland")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Iceland.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Iceland
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Iceland.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Iceland", weeks=5);
overview("Iceland");
compare_plot("Iceland", normalise=True);
# load the data
cases, deaths = get_country_data("Iceland")
# get population of the region for future normalisation:
inhabitants = population("Iceland")
print(f'Population of "Iceland": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Iceland.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Iceland
* Homepage of project: https://oscovida.github.io
* Plots are explained at http://oscovida.github.io/plots.html
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Iceland.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Iceland", weeks=5);
overview("Iceland");
compare_plot("Iceland", normalise=True);
# load the data
cases, deaths = get_country_data("Iceland")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser
- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Iceland.ipynb)
- and wait (~1 to 2 minutes)
- Then press SHIFT+RETURN to advance code cell to code cell
- See http://jupyter.org for more details on how to use Jupyter Notebook

Acknowledgements:
- Johns Hopkins University provides data for countries
- Robert Koch Institute provides data for within Germany
- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)
- Open source and scientific computing community for the data tools
- Github for hosting repository and html files
- Project Jupyter for the Notebook and binder service
- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))

--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
10_caps_y_floors.ipynb | ###Markdown
Caps and Floors
So far we have only seen linear interest rate products. Today we introduce the first example of non-linear interest rate products (i.e. options).
Setup
###Code
from finrisk import QC_Financial_3 as Qcf
from IPython.core.display import HTML
from IPython.display import Image
import modules.auxiliary as aux
import pandas as pd
###Output
_____no_output_____
###Markdown
The NY calendar is generated.
###Code
bus_cal = aux.get_cal(aux.BusCal.NY)
###Output
_____no_output_____
###Markdown
This function returns a floating leg on USD Libor 3M.
###Code
def get_libor_usd_3m_leg(
rp: Qcf.RecPay,
notional: float,
fecha_inicio: Qcf.QCDate,
maturity: Qcf.Tenor,
spread: float = 0.0) -> Qcf.Leg:
# Set up the required parameters
periodicidad_pago = Qcf.Tenor('3M')
periodo_irregular_pago = Qcf.StubPeriod.SHORTFRONT
lag_pago = 0
periodicidad_fijacion = Qcf.Tenor('3M')
periodo_irregular_fijacion = Qcf.StubPeriod.SHORTFRONT
bus_adj_rule = Qcf.BusyAdjRules.MODFOLLOW
amort_es_flujo = True
gearing = 1.0 # intereses -> gearing * Libor + spread
# we will use the same calendar for payments and fixings
lag_de_fijacion = 2
# Definition of the index
codigo = 'LIBORUSD3M'
lin_act360 = Qcf.QCInterestRate(0.0, Qcf.QCAct360(), Qcf.QCLinearWf())
fixing_lag = Qcf.Tenor('2d')
tenor = Qcf.Tenor('3m')
usd = Qcf.QCUSD()
libor_usd_3m = Qcf.InterestRateIndex(codigo,
lin_act360,
fixing_lag,
tenor,
bus_cal,
bus_cal,
usd)
# End of index definition
meses = maturity.get_years() * 12 + maturity.get_months()
fecha_final = fecha_inicio.add_months(meses)
ibor_leg = Qcf.LegFactory.build_bullet_ibor_leg(
rp,
fecha_inicio,
fecha_final,
bus_adj_rule,
periodicidad_pago,
periodo_irregular_pago,
bus_cal,
lag_pago,
periodicidad_fijacion,
periodo_irregular_fijacion,
bus_cal,
lag_de_fijacion,
libor_usd_3m,
notional,
amort_es_flujo,
usd,
spread,
gearing)
return ibor_leg
###Output
_____no_output_____
###Markdown
Format for displaying the `DataFrame`s.
###Code
frmt = {
'nominal': '{:,.2f}',
'amortizacion': '{:,.2f}',
'interes': '{:,.2f}',
'flujo': '{:,.2f}',
'interes_cap': '{:,.2f}',
'interes_total': '{:,.2f}',
'valor_tasa': '{:.6%}',
'tasa': '{:.6%}'
}
###Output
_____no_output_____
###Markdown
Black-Scholes-Merton Revisited
Let us recall the formula for a *call* on a stock that pays no dividends.
$$\begin{equation}C\left(S,t\right)=e^{-rT}\left[Se^{rT}\cdot N\left(d_1\right)-K\cdot N\left(d_2\right)\right]\end{equation}$$
$$\begin{equation}d_1=\frac{\ln\frac{Se^{rT}}{K}+\frac{1}{2}\sigma^2T}{\sigma\sqrt{T}}\end{equation}$$
$$\begin{equation}d_2=d_1-\sigma\sqrt{T}\end{equation}$$
Under the risk-adjusted measure we have $Se^{rT}=\mathbb{E}_t^Q\left(S_T\right)$ and therefore:
$$\begin{equation}C\left(S,t\right)=df\left(r,T\right)\left[\mathbb{E}_t^Q\left(S_T\right)\cdot N\left(d_1\right)-K\cdot N\left(d_2\right)\right]\end{equation}$$
$$\begin{equation}d_1=\frac{\ln\frac{\mathbb{E}_t^Q\left(S_T\right)}{K}+\frac{1}{2}\sigma^2T}{\sigma\sqrt{T}}\end{equation}$$
$$\begin{equation}d_2=d_1-\sigma\sqrt{T}\end{equation}$$
Caps
- A Cap is an option that provides protection against increases in some interest rate.
- For example, a company may have floating-rate financing and be interested in protection against increases of the reference rate above a certain threshold.
- Let us look at the example of a 0.35% Cap protecting a 2Y USD loan at floating Libor 3M.
Loan at Libor USD 3M
A loan at Libor USD 3M with a 2Y maturity is built.
###Code
credito = get_libor_usd_3m_leg(Qcf.RecPay.PAY, 10000000, Qcf.QCDate(26, 10, 2020), Qcf.Tenor('2Y'))
###Output
_____no_output_____
###Markdown
We display the leg and see that none of the coupons has its *fixing* yet, i.e. the value of Libor at the fixing date.
###Code
aux.show_leg(credito, 'IborCashflow', '').style.format(frmt)
###Output
_____no_output_____
###Markdown
[www.global-rates.com (USD Libor values)](https://www.global-rates.com/en/interest-rates/libor/american-dollar/american-dollar.aspx)
We will consider an **arbitrary scenario** in which the Libor *fixings* increase by 0.04% on each new coupon. The *fixing* of the first coupon corresponds to the value of USD Libor 3M on 22-10-2020.
###Code
first_fixing = .0021475
increment = .0004 # We assume successive fixings increase by this amount.
for i in range(credito.size()):
cshflw = credito.get_cashflow_at(i)
cshflw.set_interest_rate_value(first_fixing + i * increment)
###Output
_____no_output_____
###Markdown
We see that, in this scenario, the interest to be paid grows to more than double the interest amount of the first coupon.
###Code
df_cred = aux.show_leg(credito, 'IborCashflow', '')
df_cred.style.format(frmt)
###Output
_____no_output_____
###Markdown
Loan at Libor USD 3M + Cap
In a scenario like the previous one, the borrower of the loan might be interested in buying protection against increases in Libor. In particular, they could buy a 2Y Libor Cap.
The payoff of a Cap is given by the following `caplet` function, applied to each of the loan's coupons.
**NB:** the function is called `caplet` because it corresponds to one component of the full Cap.
###Code
def caplet(notional: float,
fixing: float,
strike: float,
fecha_inicio: Qcf.QCDate,
fecha_final: Qcf.QCDate) -> float:
valor_tasa = max(fixing - strike, 0)
lin_act360 = Qcf.QCInterestRate(valor_tasa, Qcf.QCAct360(), Qcf.QCLinearWf())
wf = lin_act360.wf(fecha_inicio, fecha_final)
# The interest accrued at valor_tasa between fecha_inicio and fecha_final
return notional * (wf - 1)
###Output
_____no_output_____
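###Markdown
A quick toy check of the payoff (the dates and rates below are arbitrary): below the strike the caplet pays nothing, above it the caplet pays the excess accrual.
###Code
# Toy check of the caplet payoff with arbitrary inputs
d_ini, d_fin = Qcf.QCDate(26, 10, 2020), Qcf.QCDate(26, 1, 2021)
print(caplet(10000000, 0.0030, 0.0035, d_ini, d_fin))  # fixing below the strike -> 0.0
print(caplet(10000000, 0.0040, 0.0035, d_ini, d_fin))  # fixing above the strike -> positive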
###Markdown
Let us apply `caplet` to each of the coupons and *fixings* of the scenario, considering a Cap rate (or strike) of 0.35%.
###Code
strike = .0035
df_cred['interes_cap'] = df_cred.apply(
lambda row: -caplet(
row['nominal'],
row['valor_tasa'],
strike,
Qcf.build_qcdate_from_string(row['fecha_inicial']),
Qcf.build_qcdate_from_string(row['fecha_final'])
),
axis=1
)
df_cred['interes_total'] = df_cred['interes'] + df_cred['interes_cap']
###Output
_____no_output_____
###Markdown
We see that each `caplet` provides a positive cash flow whenever the Libor *fixing* exceeds the Cap rate (the Cap's strike). The last (total) interest payments are at an effective rate equal to the *strike* (0.35%).
###Code
df_cred.style.format(frmt)
###Output
_____no_output_____
###Markdown
Valuation Formula
- Each individual component of a Cap is called a caplet.
- Let us evaluate a caplet: its payoff is given by
$$\begin{equation}yf\cdot N\cdot max\left(L_{T_f}-L_K,0\right)\end{equation}$$
- Where $yf$ is the year fraction used for the reference rate (most commonly Act/360), $N$ is the notional of the contract, $L_K$ is the Cap rate and $T_f$ is the date on which the *fixing* takes place.
- Note that, although the payment is determined at time $T_f$, it is only made once the accrual period given by $yf$ has ended.
- We see, therefore, that the payoff of each caplet is given by a Call formula on $L_{T_f}$ with strike $L_K$.
- We would therefore like to apply a BSM formula for the value of the caplet.
- The market has adopted as standard the use of the BSM formula for this product (in the easy-to-remember format that uses the expected value of the underlying asset). In this way we obtain:
$$\begin{equation}caplet=yf\cdot N\cdot e^{-r_{OIS}\cdot T_v}\left[\mathbb{E}_t^Q\left(L_{T_f}\right)\cdot N\left(d_1\right)-L_K\cdot N\left(d_2\right)\right]\end{equation}$$
$$\begin{equation}d_1=\frac{\ln\frac{\mathbb{E}_t^Q\left(L_{T_f}\right)}{L_K}+\frac{1}{2}\sigma^2T_f}{\sigma\sqrt{T_f}}\end{equation}$$
$$\begin{equation}d_2=d_1-\sigma\sqrt{T_f}\end{equation}$$
Where $T_f$ is the time to the option's fixing, $T_v$ is the time to the option's payment, and the discount rate is the one coming from the *OIS* curve.
Floors
- A Floor is an option that provides protection against decreases in some interest rate.
- For example, an investor may hold floating-rate assets and be interested in protection against decreases of the reference rate below a certain threshold.
- Let us look at the example of a 0.01% Floor protecting a 2Y USD asset at floating Libor 3M.
Asset at Libor USD 3M
An asset paying Libor USD 3M is built. It could be a bond, or the same loan as before seen from the point of view of the bank that grants it.
###Code
activo = get_libor_usd_3m_leg(Qcf.RecPay.RECEIVE,
10000000, Qcf.QCDate(26, 10, 2020), Qcf.Tenor('2Y'))
###Output
_____no_output_____
###Markdown
We display the leg and see that none of the coupons has its *fixing* yet, i.e. the value of Libor at the fixing date.
###Code
aux.show_leg(activo, 'IborCashflow', '').style.format(frmt)
###Output
_____no_output_____
###Markdown
We will consider an **arbitrary scenario** in which the Libor *fixings* decrease by 0.03% on each new coupon. The *fixing* of the first coupon corresponds to the value of USD Libor 3M on 22-10-2020.
###Code
increment = -.0003 # We assume successive fixings change by this amount.
for i in range(activo.size()):
cshflw = activo.get_cashflow_at(i)
cshflw.set_interest_rate_value(first_fixing + i * increment)
###Output
_____no_output_____
###Markdown
We see that, in this scenario, the interest to be received falls almost to 0 on the last coupon.
###Code
df_activo = aux.show_leg(activo, 'IborCashflow', '')
df_activo.style.format(frmt)
###Output
_____no_output_____
###Markdown
Asset at Libor USD 3M + Floor
In a scenario like the previous one, the holder of the bond or loan might be interested in buying protection against decreases in Libor. In particular, they could buy a 2Y Libor Floor.
The payoff of a Floor is given by the following `floorlet` function, applied to each of the asset's coupons.
**NB:** the function is called `floorlet` because it corresponds to one component of the full Floor.
###Code
def floorlet(notional: float,
fixing: float,
strike: float,
fecha_inicio: Qcf.QCDate,
fecha_final: Qcf.QCDate) -> float:
valor_tasa = max(strike - fixing, 0)
lin_act360 = Qcf.QCInterestRate(valor_tasa, Qcf.QCAct360(), Qcf.QCLinearWf())
wf = lin_act360.wf(fecha_inicio, fecha_final)
return notional * (wf - 1)
###Output
_____no_output_____
###Markdown
Let us apply `floorlet` to each of the coupons and *fixings* of the scenario, considering a Floor rate (or strike) of 0.15%.
###Code
strike = .0015
df_activo['interes_floor'] = df_activo.apply(
lambda row: floorlet(
row['nominal'],
row['valor_tasa'],
strike,
Qcf.build_qcdate_from_string(row['fecha_inicial']),
Qcf.build_qcdate_from_string(row['fecha_final'])
),
axis=1
)
df_activo['interes_total'] = df_activo['interes'] + df_activo['interes_floor']
###Output
_____no_output_____
###Markdown
We see that each `floorlet` provides a positive cash flow whenever the Libor *fixing* is below the Floor rate (the Floor's strike). The last (total) interest payments are at an effective rate equal to the *strike* (0.15%).
###Code
df_activo.style.format(frmt)
###Output
_____no_output_____
###Markdown
Valuation Formula
- Each individual component of a Floor is called a floorlet.
- Let us evaluate a floorlet: its payoff is given by
$$\begin{equation}yf\cdot N\cdot max\left(L_K-L_{T_f},0\right)\end{equation}$$
- Where $yf$ is the year fraction used for the reference rate (most commonly Act/360), $N$ is the notional of the contract and $L_K$ is the Floor rate.
- Note that, although the payment is determined at time $T_f$, it is only made once the accrual period given by $yf$ has ended.
- We see, therefore, that the payoff of each floorlet is given by a Put formula on $L_{T_f}$ with strike $L_K$.
- We would therefore like to apply a BSM formula for the value of the floorlet.
- The market has adopted as standard the use of the BSM formula for this product (in the easy-to-remember format that uses the expected value of the underlying asset). In this way we obtain:
$$\begin{equation}floorlet=yf\cdot N\cdot e^{-r_{OIS}\cdot T_v}\left[L_K\cdot N\left(-d_2\right)-\mathbb{E}_t^Q\left(L_{T_f}\right)\cdot N\left(-d_1\right)\right]\end{equation}$$
$$\begin{equation}d_1=\frac{\ln\frac{\mathbb{E}_t^Q\left(L_{T_f}\right)}{L_K}+\frac{1}{2}\sigma^2T_f}{\sigma\sqrt{T_f}}\end{equation}$$
$$\begin{equation}d_2=d_1-\sigma\sqrt{T_f}\end{equation}$$
Where $T_f$ is the time to the option's fixing, $T_v$ is the time to the option's payment, and the discount rate is the one coming from the *OIS* curve.
Exercise
Recall the Put – Call parity, which tells us that:
$$\begin{equation}Call-Put=Forward\end{equation}$$
- What form does this relationship take in the case of Caps and Floors?
- Note that if I buy a Cap and a Floor with the same term structure and the same strike, then:
  - If the floating rate is above the strike I receive the excess
  - If the floating rate is below the strike I pay the shortfall
$$\begin{equation}Cap-Floor=Swap\end{equation}$$
Volatility Matrix
In this screen for quoting Caps, you can see the value of the volatility parameter used to compute the value of the Cap (the NPV field).
###Code
Image(url="img/20201027_cap.gif", width=900, height=720)
###Output
_____no_output_____
###Markdown
This volatility is derived from the following volatility matrix, which, in turn, comes from quotes by banks and other institutions active in this market.
###Code
Image(url="img/20201027_cap_vol.gif", width=900, height=720)
curvas = pd.read_csv('data/20201027_curvas.csv')
curvas.columns = ['fecha', 'plazo', 'tasa', 'codigo']
curvas.style.format(frmt)
df_libor3m = curvas[curvas.codigo == 'LIBORUSD3MBBG']
df_sofr = curvas[curvas.codigo == 'USDSOFR']
df_sofr.style.format(frmt)
###Output
_____no_output_____
###Markdown
Exercise
Using the *Libor* and *SOFR* curves above, value the 5Y Cap with an ATM strike. The rate convention of both curves is Compounded Act/365.
**NOTE:** The expected value of Libor at a given date, using the **BBG Libor** curve, is computed as:
$$\begin{equation}\mathbb{E}_t^Q\left(L_{T_f}\right)=\left(\frac{P_L\left(t,T_f\right)}{P_L\left(t,T_{f+3M}\right)}-1\right)\cdot\frac{1}{yf\left(T_f,T_{f+3M}\right)}\end{equation}$$
Where $P_L\left(t,T\right)$ is the discount factor obtained from the Libor curve between $t=today$ and $T$, and $f$ is the start date of the Libor.
Ideally, use functions from the library.
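###Markdown
As an aid for the exercise, here is a minimal sketch of the BSM/Black caplet formula given earlier. It assumes `scipy` is available; `fwd_libor`, `vol`, `t_fix` and the other arguments are placeholders that would have to be built from the curves above and from the volatility matrix. The original cell that follows shows how to obtain a forward Libor from two discount factors.
###Code
# Sketch only: Black caplet value; all inputs are placeholders to be built from the curves
from scipy.stats import norm
import numpy as np

def black_caplet(notional, yf, df_ois, fwd_libor, strike, vol, t_fix):
    d1 = (np.log(fwd_libor / strike) + 0.5 * vol**2 * t_fix) / (vol * np.sqrt(t_fix))
    d2 = d1 - vol * np.sqrt(t_fix)
    return yf * notional * df_ois * (fwd_libor * norm.cdf(d1) - strike * norm.cdf(d2))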
###Code
plazos = [90, 180, 270, 360]
tasas = [.01, .012, .014, .016]
df_90 = 1/(1+tasas[0])**(plazos[0]/365)
df_90
df_180 = 1/(1+tasas[1])**(plazos[1]/365)
df_180
ValorEsperado_L3M_en_90D = (df_90 / df_180 - 1) * 360 / (plazos[1] - plazos[0])
ValorEsperado_L3M_en_90D
###Output
_____no_output_____ |
notebooks/euclidean_distance.ipynb | ###Markdown
Import NumPy and Check Python Version
###Code
import sys
import numpy as np
print('You\'re running python %s' % sys.version.split(' ')[0])
###Output
You're running python 3.8.9
###Markdown
Euclidean Distances in Python
Many machine learning algorithms access their input data primarily through pairwise (Euclidean) distances, therefore it is important that we have a fast function that computes pairwise distances of input vectors.
Assume we have $n$ data vectors $\mathbf{x_1},\dots,\mathbf{x_n}\in{\cal R}^d$ and $m$ vectors $\mathbf{z_1},\dots,\mathbf{z_m}\in{\cal R}^d$. With these data vectors, let us define two matrices $X=[\mathbf{x_1},\dots,\mathbf{x_n}]\in{\cal R}^{n\times d}$, where the $i^{th}$ row is a vector $\mathbf{x_i}$ and similarly $Z=[\mathbf{z_1},\dots,\mathbf{z_m}]\in{\cal R}^{m\times d}$.
We want a distance function that takes as input these two matrices $X$ and $Z$ and outputs a matrix $D\in{\cal R}^{n\times m}$, where
$$D_{ij}=\sqrt{(\mathbf{x_i}-\mathbf{z_j})(\mathbf{x_i}-\mathbf{z_j})^\top}.$$
###Code
def l2distanceSlow(X, Z=None):
if Z is None:
Z = X
n, d = X.shape
m = Z.shape[0]
# allocate memory for the output matrix
D = np.zeros((n,m))
for i in range(n): # loop over vectors in X
for j in range(m): # loop over vectors in Z
D[i,j] = 0.0
for k in range(d): # loop over dimensions
# compute l2-distance between the ith and jth vector
D[i,j] = D[i,j] + (X[i,k] - Z[j,k]) ** 2;
D[i,j] = np.sqrt(D[i,j]); # take square root
return D
X = np.random.rand(700, 100)
print('Running the naive version for the...')
%time Dslow = l2distanceSlow(X)
###Output
Running the naive version for the...
Wall time: 56.4 s
###Markdown
This code defines some random data in $X$ and computes the corresponding distance matrix $D$. The %time statement determines how long this code takes to run. This implementation is much too slow for such a simple operation on a small amount of data, and writing code like this to deal with matrices will result in code that takes days to run.As a general rule, we should avoid tight loops at all cost. We can do much better by performing bulk matrix operations using the NumPy package, which calls highly optimized compiled code behind the scenes. Efficient Programming with NumPyAlthough there is an execution overhead per line in Python, matrix operations are optimized and fast. Python for scientific computing can be very fast if almost all the time is spent on a few heavy duty matrix operations. In this exercise, we will transform the function above into a few matrix operations without any loops at all.The key to efficient programming in Python for machine learning in general is to think about it in terms of mathematics and not in terms of loops. ExercisesThe following three exercises will implement the euclidean distance function without loops.Exercise 1: Inner-Product MatrixShow that the Inner-Product Matrix (Gram matrix) can be expressed in terms of pure matrix multiplication.$$G_{ij}=\mathbf{x}_i\mathbf{z}_j^\top$$
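A quick note on why the one-line implementation below works: by the definition of matrix multiplication, $\left(XZ^\top\right)_{ij}=\sum_{k=1}^{d}X_{ik}Z_{jk}=\mathbf{x}_i\mathbf{z}_j^\top$, so the whole Gram matrix is simply $G=XZ^\top$, i.e. `np.dot(X, Z.T)`.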
###Code
def innerproduct(X, Z=None):
"""This function computes the inner-product matrix."""
if Z is None:
Z = X
res = np.dot(X, Z.T)
return res
###Output
_____no_output_____
###Markdown
Exercise 2: Derive the Distance MatrixLet us define two new matrices $S,R\in{\cal R}^{n\times m}$ $$S_{ij}=\mathbf{x}_i\mathbf{x}_i^\top, \ \ R_{ij}=\mathbf{z}_j\mathbf{z}_j^\top.$$Show that the squared-euclidean matrix $D^2\in{\cal R}^{n\times m}$, defined as$$D^2_{ij}=(\mathbf{x}_i-\mathbf{z}_j)(\mathbf{x}_i-\mathbf{z}_j)^\top,$$can be expressed as a linear combination of the matrix $S, G, R$. Exercise 3: Implement l2distanceImplement the function l2distance, which computes the Euclidean distance matrix $D$ without a single loop.
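A sketch of the expansion behind Exercise 2, which is what `l2distance` below implements: $D^2_{ij}=\left(\mathbf{x}_i-\mathbf{z}_j\right)\left(\mathbf{x}_i-\mathbf{z}_j\right)^\top=\mathbf{x}_i\mathbf{x}_i^\top-2\mathbf{x}_i\mathbf{z}_j^\top+\mathbf{z}_j\mathbf{z}_j^\top=S_{ij}-2G_{ij}+R_{ij}$, so $D^2=S-2G+R$ where $S_{ij}$ depends only on $i$ and $R_{ij}$ only on $j$ (hence the broadcasted `S + R.T + D` line, with `D = -2G`).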
###Code
def l2distance(X, Z=None):
"""This function computes the Euclidean distance matrix."""
if Z is None:
Z = X
n, d1 = X.shape
m, d2 = Z.shape
assert (d1 == d2), 'Dimensions of input vectors must match!'
# matrix linear combination
D = -2 * innerproduct(X,Z)
S = np.expand_dims(np.sum(np.power(X,2), axis=1),1)
R = np.expand_dims(np.sum(np.power(Z,2), axis=1),1)
res = S + R.T + D
res = np.maximum(res, 0)
res = np.sqrt(res)
return res
###Output
_____no_output_____
###Markdown
Let's now compare the speed of l2-distance function against the previous naïve implementation:
###Code
import time
current_time = lambda: int(round(time.time() * 1000))
X = np.random.rand(700,100)
Z = np.random.rand(300,100)
print('Running the naïve version...')
before = current_time()
Dslow = l2distanceSlow(X)
after = current_time()
t_slow = after - before
print('{:2.0f} ms'.format(t_slow))
print('Running the vectorized version...')
before = current_time()
Dfast = l2distance(X)
after = current_time()
t_fast = after - before
print('{:2.0f} ms'.format(t_fast))
speedup = t_slow / t_fast
print('The two methods should deviate by very little: {:05.6f}'.format(np.linalg.norm(Dfast - Dslow)))
print('but your NumPy code was {:05.2f} times faster!'.format(speedup))
###Output
Running the naïve version...
58608 ms
Running the vectorized version...
21 ms
The two methods should deviate by very little: 0.000002
but your NumPy code was 2790.86 times faster!
|
_notebooks/2022-01-21-rl-tictactoe.ipynb | ###Markdown
Solving Tic-Tac-Toe with RL Setup Imports
###Code
from abc import ABC, abstractmethod
import os
import pickle
import collections
import numpy as np
import random
import logging
import sys
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
Params
###Code
class Args:
agent_type = "q" # 'q' for Q-learning, 's' for SARSA, Specify the computer agent learning algorithm.
path = None # Specify the path for the agent pickle file. Defaults to q_agent.pkl for AGENT_TYPE='q' and "sarsa_agent.pkl for AGENT_TYPE='s'.
load = False # whether to load trained agent
teacher_episodes = 0 # employ teacher agent who knows the optimal strategy and will play for TEACHER_EPISODES games
def get_args():
args = Args()
# set default path
if args.path is None:
args.path = 'q_agent.pkl' if args.agent_type == 'q' else 'sarsa_agent.pkl'
return args
###Output
_____no_output_____
###Markdown
Logger
###Code
logging.basicConfig(stream=sys.stdout,
level = logging.DEBUG,
format='%(asctime)s [%(levelname)s] : %(message)s',
datefmt='%d-%b-%y %H:%M:%S')
logger = logging.getLogger('Tic-Tac-Toe Logger')
###Output
_____no_output_____
###Markdown
Utilities Environment
###Code
class Game:
""" The game class. New instance created for each new game. """
def __init__(self, agent, teacher=None):
self.agent = agent
self.teacher = teacher
# initialize the game board
self.board = [['-', '-', '-'], ['-', '-', '-'], ['-', '-', '-']]
def playerMove(self):
"""
        Query the player for a move and update the board accordingly.
"""
if self.teacher is not None:
action = self.teacher.makeMove(self.board)
self.board[action[0]][action[1]] = 'X'
else:
printBoard(self.board)
while True:
move = input("Your move! Please select a row and column from 0-2 "
"in the format row,col: ")
print('\n')
try:
row, col = int(move[0]), int(move[2])
except ValueError:
print("INVALID INPUT! Please use the correct format.")
continue
if row not in range(3) or col not in range(3) or not self.board[row][col] == '-':
print("INVALID MOVE! Choose again.")
continue
self.board[row][col] = 'X'
break
def agentMove(self, action):
"""
Update board according to agent's move.
"""
self.board[action[0]][action[1]] = 'O'
def checkForWin(self, key):
"""
Check to see whether the player/agent with token 'key' has won.
Returns a boolean holding truth value.
Parameters
----------
key : string
token of most recent player. Either 'O' or 'X'
"""
# check for player win on diagonals
a = [self.board[0][0], self.board[1][1], self.board[2][2]]
b = [self.board[0][2], self.board[1][1], self.board[2][0]]
if a.count(key) == 3 or b.count(key) == 3:
return True
# check for player win on rows/columns
for i in range(3):
col = [self.board[0][i], self.board[1][i], self.board[2][i]]
row = [self.board[i][0], self.board[i][1], self.board[i][2]]
if col.count(key) == 3 or row.count(key) == 3:
return True
return False
def checkForDraw(self):
"""
Check to see whether the game has ended in a draw. Returns a
boolean holding truth value.
"""
draw = True
for row in self.board:
for elt in row:
if elt == '-':
draw = False
return draw
def checkForEnd(self, key):
"""
Checks if player/agent with token 'key' has ended the game. Returns -1
if the game is still going, 0 if it is a draw, and 1 if the player/agent
has won.
Parameters
----------
key : string
token of most recent player. Either 'O' or 'X'
"""
if self.checkForWin(key):
if self.teacher is None:
printBoard(self.board)
if key == 'X':
logger.info("Player wins!")
else:
logger.info("RL agent wins!")
return 1
elif self.checkForDraw():
if self.teacher is None:
printBoard(self.board)
logger.info("It's a draw!")
return 0
return -1
def playGame(self, player_first):
"""
Begin the tic-tac-toe game loop.
Parameters
----------
player_first : boolean
Whether or not the player will move first. If False, the
agent goes first.
"""
# Initialize the agent's state and action
if player_first:
self.playerMove()
prev_state = getStateKey(self.board)
prev_action = self.agent.get_action(prev_state)
# iterate until game is over
while True:
# execute oldAction, observe reward and state
self.agentMove(prev_action)
check = self.checkForEnd('O')
if not check == -1:
# game is over. +1 reward if win, 0 if draw
reward = check
break
self.playerMove()
check = self.checkForEnd('X')
if not check == -1:
# game is over. -1 reward if lose, 0 if draw
reward = -1*check
break
else:
# game continues. 0 reward
reward = 0
new_state = getStateKey(self.board)
# determine new action (epsilon-greedy)
new_action = self.agent.get_action(new_state)
# update Q-values
self.agent.update(prev_state, new_state, prev_action, new_action, reward)
# reset "previous" values
prev_state = new_state
prev_action = new_action
        # Game over. Perform final update (the per-step reward is appended inside agent.update)
self.agent.update(prev_state, None, prev_action, None, reward)
def start(self):
"""
Function to determine who moves first, and subsequently, start the game.
If a teacher is employed, first mover is selected at random.
If a human is playing, the human is asked whether he/she would
like to move fist.
"""
if self.teacher is not None:
            # During teaching, choose who goes first randomly with equal probability
if random.random() < 0.5:
self.playGame(player_first=False)
else:
self.playGame(player_first=True)
else:
while True:
response = input("Would you like to go first? [y/n]: ")
print('')
if response == 'n' or response == 'no':
self.playGame(player_first=False)
break
elif response == 'y' or response == 'yes':
self.playGame(player_first=True)
break
else:
logger.info("Invalid input. Please enter 'y' or 'n'.")
def printBoard(board):
"""
Prints the game board as text output to the terminal.
Parameters
----------
board : list of lists
the current game board
"""
print(' 0 1 2\n')
for i, row in enumerate(board):
print('%i ' % i, end='')
for elt in row:
print('%s ' % elt, end='')
print('\n')
def getStateKey(board):
"""
Converts 2D list representing the board state into a string key
for that state. Keys are used for Q-value hashing.
Parameters
----------
board : list of lists
the current game board
"""
key = ''
for row in board:
for elt in row:
key += elt
return key
class Teacher:
"""
A class to implement a teacher that knows the optimal playing strategy.
Teacher returns the best move at any time given the current state of the game.
Note: things are a bit more hard-coded here, as this was not the main focus of
the exercise so I did not spend as much time on design/style. Everything works
properly when tested.
Parameters
----------
level : float
teacher ability level. This is a value between 0-1 that indicates the
probability of making the optimal move at any given time.
"""
def __init__(self, level=0.9):
"""
Ability level determines the probability that the teacher will follow
the optimal strategy as opposed to choosing a random available move.
"""
self.ability_level = level
def win(self, board, key='X'):
""" If we have two in a row and the 3rd is available, take it. """
# Check for diagonal wins
a = [board[0][0], board[1][1], board[2][2]]
b = [board[0][2], board[1][1], board[2][0]]
if a.count('-') == 1 and a.count(key) == 2:
ind = a.index('-')
return ind, ind
elif b.count('-') == 1 and b.count(key) == 2:
ind = b.index('-')
if ind == 0:
return 0, 2
elif ind == 1:
return 1, 1
else:
return 2, 0
# Now check for 2 in a row/column + empty 3rd
for i in range(3):
c = [board[0][i], board[1][i], board[2][i]]
d = [board[i][0], board[i][1], board[i][2]]
if c.count('-') == 1 and c.count(key) == 2:
ind = c.index('-')
return ind, i
elif d.count('-') == 1 and d.count(key) == 2:
ind = d.index('-')
return i, ind
return None
def blockWin(self, board):
""" Block the opponent if she has a win available. """
return self.win(board, key='O')
def fork(self, board):
""" Create a fork opportunity such that we have 2 threats to win. """
# Check all adjacent side middles
if board[1][0] == 'X' and board[0][1] == 'X':
if board[0][0] == '-' and board[2][0] == '-' and board[0][2] == '-':
return 0, 0
elif board[1][1] == '-' and board[2][1] == '-' and board[1][2] == '-':
return 1, 1
elif board[1][0] == 'X' and board[2][1] == 'X':
if board[2][0] == '-' and board[0][0] == '-' and board[2][2] == '-':
return 2, 0
elif board[1][1] == '-' and board[0][1] == '-' and board[1][2] == '-':
return 1, 1
elif board[2][1] == 'X' and board[1][2] == 'X':
if board[2][2] == '-' and board[2][0] == '-' and board[0][2] == '-':
return 2, 2
elif board[1][1] == '-' and board[1][0] == '-' and board[0][1] == '-':
return 1, 1
elif board[1][2] == 'X' and board[0][1] == 'X':
if board[0][2] == '-' and board[0][0] == '-' and board[2][2] == '-':
return 0, 2
elif board[1][1] == '-' and board[1][0] == '-' and board[2][1] == '-':
return 1, 1
# Check all cross corners
elif board[0][0] == 'X' and board[2][2] == 'X':
if board[1][0] == '-' and board[2][1] == '-' and board[2][0] == '-':
return 2, 0
elif board[0][1] == '-' and board[1][2] == '-' and board[0][2] == '-':
return 0, 2
elif board[2][0] == 'X' and board[0][2] == 'X':
if board[2][1] == '-' and board[1][2] == '-' and board[2][2] == '-':
return 2, 2
elif board[1][0] == '-' and board[0][1] == '-' and board[0][0] == '-':
return 0, 0
return None
def blockFork(self, board):
""" Block the opponents fork if she has one available. """
corners = [board[0][0], board[2][0], board[0][2], board[2][2]]
# Check all adjacent side middles
if board[1][0] == 'O' and board[0][1] == 'O':
if board[0][0] == '-' and board[2][0] == '-' and board[0][2] == '-':
return 0, 0
elif board[1][1] == '-' and board[2][1] == '-' and board[1][2] == '-':
return 1, 1
elif board[1][0] == 'O' and board[2][1] == 'O':
if board[2][0] == '-' and board[0][0] == '-' and board[2][2] == '-':
return 2, 0
elif board[1][1] == '-' and board[0][1] == '-' and board[1][2] == '-':
return 1, 1
elif board[2][1] == 'O' and board[1][2] == 'O':
if board[2][2] == '-' and board[2][0] == '-' and board[0][2] == '-':
return 2, 2
elif board[1][1] == '-' and board[1][0] == '-' and board[0][1] == '-':
return 1, 1
elif board[1][2] == 'O' and board[0][1] == 'O':
if board[0][2] == '-' and board[0][0] == '-' and board[2][2] == '-':
return 0, 2
elif board[1][1] == '-' and board[1][0] == '-' and board[2][1] == '-':
return 1, 1
# Check all cross corners (first check for double fork opp using the corners array)
elif corners.count('-') == 1 and corners.count('O') == 2:
return 1, 2
elif board[0][0] == 'O' and board[2][2] == 'O':
if board[1][0] == '-' and board[2][1] == '-' and board[2][0] == '-':
return 2, 0
elif board[0][1] == '-' and board[1][2] == '-' and board[0][2] == '-':
return 0, 2
elif board[2][0] == 'O' and board[0][2] == 'O':
if board[2][1] == '-' and board[1][2] == '-' and board[2][2] == '-':
return 2, 2
elif board[1][0] == '-' and board[0][1] == '-' and board[0][0] == '-':
return 0, 0
return None
def center(self, board):
""" Pick the center if it is available. """
if board[1][1] == '-':
return 1, 1
return None
def corner(self, board):
""" Pick a corner move. """
# Pick opposite corner of opponent if available
if board[0][0] == 'O' and board[2][2] == '-':
return 2, 2
elif board[2][0] == 'O' and board[0][2] == '-':
return 0, 2
elif board[0][2] == 'O' and board[2][0] == '-':
return 2, 0
elif board[2][2] == 'O' and board[0][0] == '-':
return 0, 0
# Pick any corner if no opposites are available
elif board[0][0] == '-':
return 0, 0
elif board[2][0] == '-':
return 2, 0
elif board[0][2] == '-':
return 0, 2
elif board[2][2] == '-':
return 2, 2
return None
def sideEmpty(self, board):
""" Pick an empty side. """
if board[1][0] == '-':
return 1, 0
elif board[2][1] == '-':
return 2, 1
elif board[1][2] == '-':
return 1, 2
elif board[0][1] == '-':
return 0, 1
return None
def randomMove(self, board):
""" Chose a random move from the available options. """
possibles = []
for i in range(3):
for j in range(3):
if board[i][j] == '-':
possibles += [(i, j)]
return possibles[random.randint(0, len(possibles)-1)]
def makeMove(self, board):
"""
Trainer goes through a hierarchy of moves, making the best move that
        is currently available each time. A tuple is returned that represents
(row, col).
"""
        # Choose randomly with some probability so that the teacher does not always win
if random.random() > self.ability_level:
return self.randomMove(board)
# Follow optimal strategy
a = self.win(board)
if a is not None:
return a
a = self.blockWin(board)
if a is not None:
return a
a = self.fork(board)
if a is not None:
return a
a = self.blockFork(board)
if a is not None:
return a
a = self.center(board)
if a is not None:
return a
a = self.corner(board)
if a is not None:
return a
a = self.sideEmpty(board)
if a is not None:
return a
return self.randomMove(board)
###Output
_____no_output_____
###Markdown
Agents
###Code
class Learner(ABC):
"""
Parent class for Q-learning and SARSA agents.
Parameters
----------
alpha : float
learning rate
gamma : float
temporal discounting rate
eps : float
probability of random action vs. greedy action
eps_decay : float
epsilon decay rate. Larger value = more decay
"""
def __init__(self, alpha, gamma, eps, eps_decay=0.):
# Agent parameters
self.alpha = alpha
self.gamma = gamma
self.eps = eps
self.eps_decay = eps_decay
# Possible actions correspond to the set of all x,y coordinate pairs
self.actions = []
for i in range(3):
for j in range(3):
self.actions.append((i,j))
# Initialize Q values to 0 for all state-action pairs.
# Access value for action a, state s via Q[a][s]
self.Q = {}
for action in self.actions:
self.Q[action] = collections.defaultdict(int)
# Keep a list of reward received at each episode
self.rewards = []
def get_action(self, s):
"""
Select an action given the current game state.
Parameters
----------
s : string
state
"""
# Only consider the allowed actions (empty board spaces)
possible_actions = [a for a in self.actions if s[a[0]*3 + a[1]] == '-']
if random.random() < self.eps:
            # Explore: choose a random action.
action = possible_actions[random.randint(0,len(possible_actions)-1)]
else:
            # Exploit: choose the action with the highest Q-value.
values = np.array([self.Q[a][s] for a in possible_actions])
# Find location of max
ix_max = np.where(values == np.max(values))[0]
if len(ix_max) > 1:
# If multiple actions were max, then sample from them
ix_select = np.random.choice(ix_max, 1)[0]
else:
# If unique max action, select that one
ix_select = ix_max[0]
action = possible_actions[ix_select]
# update epsilon; geometric decay
self.eps *= (1.-self.eps_decay)
return action
def save(self, path):
""" Pickle the agent object instance to save the agent's state. """
if os.path.isfile(path):
os.remove(path)
f = open(path, 'wb')
pickle.dump(self, f)
logger.info('model saved at {}'.format(path))
f.close()
@abstractmethod
def update(self, s, s_, a, a_, r):
pass
class Qlearner(Learner):
"""
A class to implement the Q-learning agent.
"""
def __init__(self, alpha, gamma, eps, eps_decay=0.):
super().__init__(alpha, gamma, eps, eps_decay)
def update(self, s, s_, a, a_, r):
"""
Perform the Q-Learning update of Q values.
Parameters
----------
s : string
previous state
s_ : string
new state
a : (i,j) tuple
previous action
a_ : (i,j) tuple
new action. NOT used by Q-learner!
r : int
reward received after executing action "a" in state "s"
"""
# Update Q(s,a)
if s_ is not None:
# hold list of Q values for all a_,s_ pairs. We will access the max later
possible_actions = [action for action in self.actions if s_[action[0]*3 + action[1]] == '-']
Q_options = [self.Q[action][s_] for action in possible_actions]
# update
self.Q[a][s] += self.alpha*(r + self.gamma*max(Q_options) - self.Q[a][s])
else:
# terminal state update
self.Q[a][s] += self.alpha*(r - self.Q[a][s])
# add r to rewards list
self.rewards.append(r)
class SARSAlearner(Learner):
"""
A class to implement the SARSA agent.
"""
def __init__(self, alpha, gamma, eps, eps_decay=0.):
super().__init__(alpha, gamma, eps, eps_decay)
def update(self, s, s_, a, a_, r):
"""
Perform the SARSA update of Q values.
Parameters
----------
s : string
previous state
s_ : string
new state
a : (i,j) tuple
previous action
a_ : (i,j) tuple
new action
r : int
reward received after executing action "a" in state "s"
"""
# Update Q(s,a)
if s_ is not None:
self.Q[a][s] += self.alpha*(r + self.gamma*self.Q[a_][s_] - self.Q[a][s])
else:
# terminal state update
self.Q[a][s] += self.alpha*(r - self.Q[a][s])
# add r to rewards list
self.rewards.append(r)
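# A note on the two updates implemented above (standard definitions):
#   Q-learning (off-policy): Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
#   SARSA      (on-policy):  Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a')        - Q(s,a))
# Q-learning bootstraps from the best available next action, while SARSA bootstraps from
# the action the epsilon-greedy policy actually selected; both reduce to
# Q(s,a) <- Q(s,a) + alpha * (r - Q(s,a)) at terminal states, as coded above.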
###Output
_____no_output_____
###Markdown
Simulation
###Code
class GameLearning(object):
"""
A class that holds the state of the learning process. Learning
agents are created/loaded here, and a count is kept of the
games that have been played.
"""
def __init__(self, args, alpha=0.5, gamma=0.9, epsilon=0.1):
if args.load:
# load an existing agent and continue training
if not os.path.isfile(args.path):
raise ValueError("Cannot load agent: file does not exist.")
with open(args.path, 'rb') as f:
agent = pickle.load(f)
else:
# check if agent state file already exists, and ask
# user whether to overwrite if so
if os.path.isfile(args.path):
print('An agent is already saved at {}.'.format(args.path))
while True:
response = input("Are you sure you want to overwrite? [y/n]: ")
if response.lower() in ['y', 'yes']:
break
elif response.lower() in ['n', 'no']:
print("OK. Quitting.")
sys.exit(0)
else:
print("Invalid input. Please choose 'y' or 'n'.")
if args.agent_type == "q":
agent = Qlearner(alpha,gamma,epsilon)
else:
agent = SARSAlearner(alpha,gamma,epsilon)
self.games_played = 0
self.path = args.path
self.agent = agent
def beginPlaying(self):
""" Loop through game iterations with a human player. """
print("Welcome to Tic-Tac-Toe. You are 'X' and the computer is 'O'.")
def play_again():
logger.info("Games played: %i" % self.games_played)
while True:
play = input("Do you want to play again? [y/n]: ")
if play == 'y' or play == 'yes':
return True
elif play == 'n' or play == 'no':
return False
else:
print("Invalid input. Please choose 'y' or 'n'.")
while True:
game = Game(self.agent)
game.start()
self.games_played += 1
self.agent.save(self.path)
if not play_again():
print("OK. Quitting.")
break
def beginTeaching(self, episodes):
""" Loop through game iterations with a teaching agent. """
teacher = Teacher()
        # Train for the allotted number of episodes
while self.games_played < episodes:
game = Game(self.agent, teacher=teacher)
game.start()
self.games_played += 1
# Monitor progress
if self.games_played % 1000 == 0:
logger.info("Games played: %i" % self.games_played)
# save final agent
self.agent.save(self.path)
def plot_agent_reward(rewards):
""" Function to plot agent's accumulated reward vs. iteration """
plt.plot(np.cumsum(rewards))
plt.title('Agent Cumulative Reward vs. Iteration')
plt.ylabel('Reward')
plt.xlabel('Episode')
plt.show()
###Output
_____no_output_____
###Markdown
Jobs
###Code
logger.info('JOB START: employing expert to play game with q-learning')
# get default args
args = get_args()
# set args
args.teacher_episodes = 1000
# initialize game instance
gl = GameLearning(args)
# letting teacher/expert play
gl.beginTeaching(args.teacher_episodes)
logger.info('JOB END: employing expert to play game with q-learning')
logger.info('JOB START: playing game with on q-learning')
# get default args
args = get_args()
# initialize game instance
gl = GameLearning(args)
# start playing
gl.beginPlaying()
logger.info('JOB END: playing game with on q-learning')
logger.info('JOB START: employing expert to play game with sarsa')
# get default args
args = get_args()
# set args
args.agent_type = 's'
args.teacher_episodes = 1000
# initialize game instance
gl = GameLearning(args)
# letting teacher/expert play
gl.beginTeaching(args.teacher_episodes)
logger.info('JOB END: employing expert to play game with sarsa')
logger.info('JOB START: playing game with on sarsa')
# get default args
args = get_args()
# set args
args.agent_type = 's'
# initialize game instance
gl = GameLearning(args)
# start playing
gl.beginPlaying()
logger.info('JOB END: playing game with on sarsa')
logging.getLogger('matplotlib').setLevel(logging.WARNING)
logger.info('JOB START: plotting agent rewards')
# get default args
args = get_args()
# loading agent
with open(args.path, 'rb') as f:
agent = pickle.load(f)
# plot rewards
plot_agent_reward(agent.rewards)
logger.info('JOB END: plotting agent rewards')
###Output
08-Nov-21 11:06:02 [INFO] : JOB START: plotting agent rewards
|
TF_Neural_Network.ipynb | ###Markdown
 To introduce non-linearity into the network we use activation functions, for example sigmoid, tanh, ReLU and so on. Here we will learn to create a simple deep neural network and tune its hyperparameters.
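For reference (standard definitions, not specific to this notebook): the ReLU used in the hidden layers below is $\mathrm{ReLU}(x)=\max(0,x)$ and the sigmoid is $\sigma(x)=\frac{1}{1+e^{-x}}$; without such non-linear activations, stacked Dense layers would collapse into a single linear model.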
###Code
%tensorflow_version 2.x
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from matplotlib import pyplot as plt
import seaborn as sns
# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format
print("Imported modules.")
train_df = pd.read_csv('https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv', sep=',')
train_df = train_df.reindex(np.random.permutation(train_df.index))
test_df = pd.read_csv('https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv', sep=',')
# normalizing the values, by calculating z score
train_df_mean = train_df.mean()
train_df_std = train_df.std()
train_df_norm = (train_df - train_df_mean) / train_df_std
test_df_mean = test_df.mean()
test_df_std = test_df.std()
test_df_norm = (test_df - test_df_mean) / test_df_std
train_df_norm.head()
feature_column = [] # empty list for holding all feature columns.
resolution_in_zs = 0.3
# create bucket feature column for latitude.
latitude_numeric_column = tf.feature_column.numeric_column('latitude')
latitude_boundaries = list(np.arange(int(min(train_df_norm['latitude'])), int(max(train_df_norm['latitude'])), resolution_in_zs))
bucket_latitude = tf.feature_column.bucketized_column(latitude_numeric_column, latitude_boundaries)
# create bucket feature column for longitude.
longitude_numeric_column = tf.feature_column.numeric_column('longitude')
longitude_boundaries = list(np.arange(int(min(train_df_norm['longitude'])), int(max(train_df_norm['longitude'])), resolution_in_zs))
bucket_longitude = tf.feature_column.bucketized_column(longitude_numeric_column, longitude_boundaries)
# feature cross for longitude and latitude
longi_x_lati = tf.feature_column.crossed_column([bucket_latitude, bucket_longitude], hash_bucket_size=100)
longi_x_lati_col = tf.feature_column.indicator_column(longi_x_lati)
feature_column.append(longi_x_lati_col)
# feature column for median income
median_income = tf.feature_column.numeric_column('median_income')
feature_column.append(median_income)
# feature column for population
population = tf.feature_column.numeric_column('population')
feature_column.append(population)
# convert the list of feature column into a dense layer that will later be fed into network for training.
my_feature_layer = tf.keras.layers.DenseFeatures(feature_column)
# define plotting function
def plot_the_loss_curve(epochs, mse):
"""Plot a curve of loss vs. epoch."""
plt.figure()
plt.xlabel("Epoch")
plt.ylabel("Mean Squared Error")
plt.plot(epochs, mse, label="Loss")
plt.legend()
plt.ylim([mse.min()*0.95, mse.max() * 1.03])
plt.show()
# creating functions to create and train model
def create_model(my_learning_rate, feature_layer):
'''create and compile a simple linear regression model'''
model = tf.keras.Sequential()
model.add(feature_layer)
model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))
model.compile(optimizer=tf.keras.optimizers.RMSprop(my_learning_rate), loss='mean_squared_error', metrics=[tf.keras.metrics.MeanSquaredError()])
return model
def train_model(model, dataset, epochs, batch_size, label_name):
features = {key:np.array(value) for key, value in dataset.items()}
label = np.array(features.pop(label_name))
history = model.fit(x=features, y=label, batch_size=batch_size, epochs=epochs, shuffle=True)
epochs = history.epoch
hist = pd.DataFrame(history.history)
    mse = hist['mean_squared_error']  # mean squared error (not RMSE), matching the compiled metric
    return epochs, mse
learning_rate = 0.01
epochs = 15
batch_size = 1000
label_name = 'median_house_value'
my_model = create_model(learning_rate, my_feature_layer)
epochs, mse = train_model(my_model, train_df_norm, epochs, batch_size, label_name)
plot_the_loss_curve(epochs, mse)
test_feature = {key:np.array(value) for key, value in test_df_norm.items()}
test_label = np.array(test_feature.pop(label_name))
print("\n Evaluate the linear regression model against the test set:")
my_model.evaluate(x=test_feature, y=test_label, batch_size=batch_size)
def create_deep_model(my_learning_rate, feature_layer):
model = tf.keras.Sequential()
model.add(feature_layer)
model.add(tf.keras.layers.Dense(10, 'relu', name='hidden1'))
model.add(tf.keras.layers.Dense(6, 'relu', name='hidden2'))
model.add(tf.keras.layers.Dense(1, name='output'))
model.compile(optimizer=tf.keras.optimizers.RMSprop(my_learning_rate), loss='mean_squared_error', metrics=[tf.keras.metrics.MeanSquaredError()])
return model
def train_deep_model(model, dataset, epochs, label_name, batch_size=None):
features = {key:np.array(value) for key, value in dataset.items()}
label = np.array(features.pop(label_name))
history = model.fit(x=features, y = label, epochs = epochs, batch_size=batch_size, shuffle=True)
epochs = history.epoch
hist = pd.DataFrame(history.history)
mse = hist['mean_squared_error']
return epochs, mse
learning_rate = 0.01
epochs = 20
batch_size = 1000
label_name = 'median_house_value'
my_model = create_deep_model(learning_rate, my_feature_layer)
epochs, mse = train_deep_model(my_model, train_df_norm, epochs, label_name, batch_size)
plot_the_loss_curve(epochs, mse)
my_model.evaluate(x=test_feature, y=test_label, batch_size=batch_size)
def create_model_dense_regularizatio(my_learning_rate, feature_layer):
model = tf.keras.Sequential()
model.add(feature_layer)
model.add(tf.keras.layers.Dense(10, 'relu', kernel_regularizer=tf.keras.regularizers.l2(0.005), name='hidden1'))
model.add(tf.keras.layers.Dense(6, 'relu', kernel_regularizer=tf.keras.regularizers.l2(0.005), name='hidden2'))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(1, name='output'))
model.compile(optimizer=tf.keras.optimizers.RMSprop(my_learning_rate), loss='mean_squared_error', metrics=[tf.keras.metrics.MeanSquaredError()])
return model
learning_rate = 0.01
epochs = 20
batch_size = 1000
label_name = 'median_house_value'
my_model = create_model_dense_regularizatio(learning_rate, my_feature_layer)
epochs, mse = train_deep_model(my_model, train_df_norm, epochs, label_name, batch_size)
plot_the_loss_curve(epochs, mse)
my_model.evaluate(x=test_feature, y=test_label, batch_size=batch_size)
###Output
Train on 17000 samples
Epoch 1/20
17000/17000 [==============================] - 1s 32us/sample - loss: 0.6016 - mean_squared_error: 0.5314
Epoch 2/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4498 - mean_squared_error: 0.4032
Epoch 3/20
17000/17000 [==============================] - 0s 3us/sample - loss: 0.4459 - mean_squared_error: 0.4058
Epoch 4/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4312 - mean_squared_error: 0.3950
Epoch 5/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4298 - mean_squared_error: 0.3966
Epoch 6/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4291 - mean_squared_error: 0.3975
Epoch 7/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4246 - mean_squared_error: 0.3941
Epoch 8/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4309 - mean_squared_error: 0.4012
Epoch 9/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4193 - mean_squared_error: 0.3903
Epoch 10/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4192 - mean_squared_error: 0.3914
Epoch 11/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4223 - mean_squared_error: 0.3951
Epoch 12/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4219 - mean_squared_error: 0.3950
Epoch 13/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4190 - mean_squared_error: 0.3930
Epoch 14/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4182 - mean_squared_error: 0.3925
Epoch 15/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4181 - mean_squared_error: 0.3929
Epoch 16/20
17000/17000 [==============================] - 0s 3us/sample - loss: 0.4124 - mean_squared_error: 0.3877
Epoch 17/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4102 - mean_squared_error: 0.3855
Epoch 18/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4172 - mean_squared_error: 0.3934
Epoch 19/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4158 - mean_squared_error: 0.3923
Epoch 20/20
17000/17000 [==============================] - 0s 4us/sample - loss: 0.4150 - mean_squared_error: 0.3915
|
script/example/Simulation.ipynb | ###Markdown
Reading and plotting simulation output
###Code
import microval.simulation as msim
import microval.experiment as mexp
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Parameters to adjust.
###Code
## Simulation output
# Heatmap as image file
simulation_image_file = r"P8_SDV183.png"
# Simulated region
ebsd_extend_P8 = (0.0, 856.2000300000001, 30.484095, 499.523474)
## For overlay plot
# Paths to experimental *.mat files
binaraydata_mat_file_name = r"image_time_series_red.mat"
segmenteddata_mat_file_name = r"foreground_mask.mat"
fatiguedata_mat_file_name = r"fatigue_data_red.mat"
# Scale factor for conversion of image data to physical domain
scale_factor = 0.6
###Output
_____no_output_____
###Markdown
Simulation output Read simulation output and transform appropriately.
###Code
sim = msim.SimulationResult(simulation_image_file)
sim.flip(updown=False, leftright=True)
sim.transform_to(ebsd_extend_P8)
###Output
_____no_output_____
###Markdown
Plot image.
###Code
fig=plt.figure(figsize=(16,9))
sim.imshow()
###Output
_____no_output_____
###Markdown
Overlay simulation with binary data Read binary data and plot overlay.
###Code
bindat = mexp.BinaryData(binaraydata_mat_file_name=binaraydata_mat_file_name, fatiguedata_mat_file_name=fatiguedata_mat_file_name)
fig=plt.figure(figsize=(16,9))
sim.imshow()
bindat.imshow(scale_factor = scale_factor, time_step_number = bindat.get_num_binary_data()-1, alpha = 0.5, region = ebsd_extend_P8)
###Output
\\fe00fs45.de.bosch.com\nae2rng$\10_script\micro-fat-val-framework\script\microval\microval\__init__.py:37: MatplotlibDeprecationWarning: Case-insensitive properties were deprecated in 3.3 and support will be removed two minor releases later
plt.imshow(X,
###Markdown
Overlay simulation with SEM data Overlay simulation results with SEM image data. Plot the latter as black points irrespective of the segmentation result (cracks/extrusions) for simplicity.
###Code
segdat = mexp.SegmentedData(segmenteddata_mat_file_name=segmenteddata_mat_file_name,fatiguedata_mat_file_name=fatiguedata_mat_file_name)
fig=plt.figure(figsize=(16,9))
sim.imshow()
segdat.imshow(scale_factor = scale_factor, as_binary = True, invert_blackwhite=True, alpha=0.5, region = ebsd_extend_P8)
###Output
_____no_output_____ |
03_test_automl/notebooks/automl_test.ipynb | ###Markdown
getdata- Wine tasting: https://www.kaggle.com/zynicide/wine-reviews/download- Titanic: https://www.kaggle.com/c/titanic/data
###Code
from automl import runner
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# import sklearn.ensemble.RandomForestClassifier
# import sklearn.metrics.accuracy_score
import pandas as pd
import numpy as np
import re
import os.path
###Output
_____no_output_____
###Markdown
Wine data
###Code
# Load wine data
if os.path.isfile("../data/wine.csv"):
df_wine = pd.read_csv("../data/wine.csv", encoding="utf-8", sep=";")
elif os.path.isfile("../data/winemag-data-130k-v2.csv"):
df_wine = pd.read_csv("../data/winemag-data-130k-v2.csv", encoding="utf-8", sep=",")
elif os.path.isfile("../data/winemag-data_first150k.json"):
df_wine = pd.read_json("../data/winemag-data_first150k.json", encoding="utf-8")
df_wine.head(3)
summary = runner.naive_runner(df_wine,target="points")
summary
###Output
_____no_output_____
###Markdown
Titanic
###Code
if os.path.isfile("../data/titanic_train.csv"):
df_titanic = pd.read_csv("../data/titanic_train.csv", encoding="utf-8", sep=",")
df_titanic.head(3)
model = RandomForestClassifier(n_estimators=10)
eval_metric = accuracy_score
titanic_summary = runner.naive_runner(
df=df_titanic
,target="Survived"
,cols_rm=["PassengerId"]
,model=model
,eval_metric=eval_metric)
titanic_summary
###Output
_____no_output_____ |
SIC_AI_Coding_Exercises/SIC_AI_Chapter_02_Coding_Exercises/ex_0103.ipynb | ###Markdown
Coding Exercise 0103 1. List: 1.1. Creating lists:
###Code
x = [1, 3, True, False, 'Hello', [3,5,7]]
type(x)
# A range.
range(1,5)
# A range cast as list.
x = list(range(1,5))
x
list(range(1,10,2))
list(range(0,21,5))
###Output
_____no_output_____
###Markdown
1.2. Indexing and slicing lists:
###Code
x = [1, 2, ['Life', 'is']]
x[1]
x[2]
x[2][1]
x = [1, 2, 3, ['a', 'b', 'c'], 4, 5]
x[2:5]
x[3][:2]
###Output
_____no_output_____
###Markdown
1.3. Operations with lists:
###Code
a = [1, 2, 3]
b = [4, 5, 6]
a + b
x = [1, 2, 3]
x*3
###Output
_____no_output_____
###Markdown
1.4. Changing list content:
###Code
x = [1, 2, 3]
x[1] = -1
x
x[1:3] = [3, 5, 7, 9, 11]
x
###Output
_____no_output_____
###Markdown
Let's carefully distinguish the following cases:
###Code
# Replace part of a list.
x = [1, 2, 3]
x[1:2] = [-1, -2, -3]
x
# List a particular element of the list.
x = [1, 2, 3]
x[1] = [-1, -2, -3]
x
###Output
_____no_output_____
###Markdown
1.5. Deleting part of a list:
###Code
x = [1, 2, 3, 4, 5]
x[1:3] = []
x
x = [1, 2, 3, 4, 5]
del x[2]
x
###Output
_____no_output_____
###Markdown
1.6. Functions and methods for list objects:
###Code
x = [2,4,6,8,10]
len(x)
x = [1, 2, 3]
x.append(4)
x.append([5,6])
x
x = [1, 2, 3, 4, 5]
x.reverse()
x
# The sort() method changes the list permanently.
x = [3, 1, 5, 2, 4]
x.sort()
x
x = [3, 1, 5, 2, 4]
x.sort(reverse=True)
x
# The function sorted() just returns a view without changing the list permanently.
x = [3, 1, 5, 2, 4]
sorted(x)
# We can verify that 'x' is the same as before.
x
###Output
_____no_output_____
###Markdown
2. Tuple: 2.1. Creating tuples:
###Code
x = (1, 3, True, False, 'Hello', [3,5,7])
type(x)
# A range cast as tuple.
x = tuple(range(1,5))
x
###Output
_____no_output_____
###Markdown
2.2. Indexing and slicing tuples:
###Code
x = (1, 2, ('Life', 'is'))
x[2]
x[2][1]
x = (1, 2, 3, ['a', 'b', 'c'], 4, 5)
x[3][:2]
###Output
_____no_output_____
###Markdown
2.3. Operations with tuples:
###Code
a = (1, 2, 3)
b = (4, 5, 6)
a + b
a * 3
###Output
_____no_output_____
###Markdown
3. Dictionary: 3.1. Creating dictionaries:
###Code
d = {'x':[1,2,3], 'y':[4,5,6]}
d['x']
d['y']
###Output
_____no_output_____
###Markdown
3.2. Adding key-value pairs, deletion, update:
###Code
d = {}
d['name'] = 'John'
d['gender'] = 'Male'
d['age'] = 23
d
del d['age']
d
d['name'] = 'Andrew'
d
###Output
_____no_output_____
###Markdown
3.3. Dictionary methods and operations:
###Code
d.keys()
d.values()
d.items()
d.clear()
d
d = {'name': 'John', 'gender': 'Male', 'age': 23}
d.get('wage', 0)
# Check whether 'name' is a key of d.
'name' in d
# Check whether 'wage' is a key of d.
'wage' in d
###Output
_____no_output_____
###Markdown
4. Set: 4.1. Creating a set:
###Code
b = set([1,2,3,3,3,3,4,5])
b
###Output
_____no_output_____
###Markdown
4.2. Operations with sets:
###Code
s1 = set([1, 2, 3, 4, 5])
s2 = set([4, 5, 6, 7, 8])
s1 & s2
s1.intersection(s2)
s1 | s2
s1.union(s2)
s1 - s2
s1.difference(s2)
a = set([1, 2, 3, 4, 5])
a.add(6)
a
a.update([7, 8, 9])
a
a.remove(9)
a
###Output
_____no_output_____ |
dataset_analyses/sample_error_examples.ipynb | ###Markdown
Import
###Code
%matplotlib widget
import pickle
import os
import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import roc_auc_score, roc_curve
###Output
_____no_output_____
###Markdown
Define
###Code
def compute_px(ppl, lls):
lengths = np.array([len(ll) for ll in lls])
logpx = np.log(ppl) * lengths * -1
return logpx
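# Rationale for compute_px (standard identity): token-level perplexity is
# PPL = exp(-(1/N) * sum_i log p(token_i | context)), so the sequence log-likelihood
# is log p(x) = -N * log(PPL), which is exactly np.log(ppl) * lengths * -1 above.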
def compute_auroc_all(id_msp, id_px, id_ppl, ood_msp, ood_px, ood_ppl, do_print=False):
score_px = compute_auroc(-id_px, -ood_px)
score_py = compute_auroc(-id_msp, -ood_msp)
score_ppl = compute_auroc(id_ppl, ood_ppl)
if do_print:
print(f"P(x): {score_px:.3f}")
print(f"P(y | x): {score_py:.3f}")
print(f"Perplexity: {score_ppl:.3f}")
scores = {
'p_x': score_px,
'p_y': score_py,
'ppl': score_ppl
}
return scores
def compute_auroc(id_pps, ood_pps, normalize=False, return_curve=False):
y = np.concatenate((np.ones_like(ood_pps), np.zeros_like(id_pps)))
scores = np.concatenate((ood_pps, id_pps))
if normalize:
scores = (scores - scores.min()) / (scores.max() - scores.min())
if return_curve:
return roc_curve(y, scores)
else:
return 100*roc_auc_score(y, scores)
def compute_far(id_pps, ood_pps, rate=5, return_indices=False):
if return_indices:
cut_off = np.percentile(ood_pps, rate)
id_indices = [i for i, pps in enumerate(id_pps) if pps > cut_off]
ood_indices = [i for i, pps in enumerate(ood_pps) if pps < cut_off]
return {'id': id_indices, 'ood': ood_indices}
else:
incorrect = len(id_pps[id_pps > np.percentile(ood_pps, rate)])
return 100*incorrect / len(id_pps)
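# With rate=5 this is FAR95: the threshold is the 5th percentile of the OOD scores
# (so 95% of OOD examples score above it and are detected), and the returned value is
# the percentage of in-distribution examples that also exceed that threshold,
# i.e. the false-alarm rate at 95% OOD recall.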
def compute_metric_all(id_msp, id_px, id_ppl, ood_msp, ood_px, ood_ppl, metric='auroc', do_print=False):
if metric == 'auroc':
score_px = compute_auroc(-id_px, -ood_px)
score_py = compute_auroc(-id_msp, -ood_msp)
score_ppl = compute_auroc(id_ppl, ood_ppl)
elif metric == 'far':
score_px = compute_far(-id_px, -ood_px)
score_py = compute_far(-id_msp, -ood_msp)
score_ppl = compute_far(id_ppl, ood_ppl)
else:
raise Exception('Invalid metric name')
if do_print:
print(f"Metric {metric}:")
print(f"P(x): {score_px:.3f}")
print(f"P(y | x): {score_py:.3f}")
print(f"Perplexity: {score_ppl:.3f}\n")
scores = {
'p_x': score_px,
'p_y': score_py,
'ppl': score_ppl
}
return scores
def read_model_out(fname):
ftype = fname.split('.')[1]
if ftype == 'pkl':
with open(fname, 'rb') as f:
return pickle.load(f)
elif ftype == 'npy':
return np.load(fname)
else:
raise KeyError(f'{ftype} not supported')
###Output
_____no_output_____
###Markdown
Summarize Presettings
###Code
verbose = False
repo = os.path.dirname(os.path.dirname(os.path.abspath('__file__')))
print(repo)
output_dir = os.path.join(repo, 'output')
fig_dir = os.path.join(repo, 'figs')
train_sets = ['imdb', 'sst2']
eval_sets = ['imdb', 'sst2', 'snli', 'counterfactual-imdb', 'rte']
methods = ['msp', 'lls', 'pps']
signals = {}
for train_set in train_sets:
for eval_set in eval_sets:
signals[(train_set, eval_set)] = {method: None for method in methods}
###Output
_____no_output_____
###Markdown
Import Signals
###Code
method2ftype={
'msp': 'npy',
'lls': 'pkl',
'pps': 'npy',
}
###Output
_____no_output_____
###Markdown
Subsampling Indices
###Code
def get_indices(fname):
with open(fname, 'r') as f:
return [int(x) for x in f.readlines()]
roberta_dir = os.path.join(repo, 'roberta')
subsample_indices = {
data_name: get_indices(os.path.join(roberta_dir, f'{data_name}_indices.txt'))
for data_name in train_sets
}
###Output
_____no_output_____
###Markdown
GPT2
###Code
best_lr = {
'imdb': '5e-5',
'sst2': '5e-5',
}
methods = ['lls', 'pps']
not_readys = []
for (train_set, eval_set), signals_dict in signals.items():
for method in methods:
signal_fname = os.path.join(output_dir, 'gpt2', train_set, f'{eval_set}_{best_lr[train_set]}_{method}.{method2ftype[method]}')
if not os.path.exists(signal_fname):
not_readys.append((train_set, eval_set, method))
continue
signal = read_model_out(signal_fname)
if eval_set in train_sets:
idxs = subsample_indices[eval_set]
signal = [signal[idx] for idx in idxs]
signals_dict[method] = signal
for not_ready in not_readys:
print(not_ready)
###Output
_____no_output_____
###Markdown
RoBERTa
###Code
methods = ['msp']
not_readys = []
model_type = 'roberta-base'
for (train_set, eval_set), signals_dict in signals.items():
for method in methods:
signal_fname = os.path.join(output_dir, 'roberta', train_set, f'{model_type}_{eval_set}_{method}.{method2ftype[method]}')
if not os.path.exists(signal_fname):
not_readys.append((train_set, eval_set, method))
continue
signal = read_model_out(signal_fname)
if train_set != eval_set and eval_set in train_sets:
idxs = subsample_indices[eval_set]
signal = np.array([signal[idx] for idx in idxs])
signals_dict[method] = signal
for not_ready in not_readys:
print(not_ready)
###Output
_____no_output_____
###Markdown
Get Error Indices by FAR95
###Code
score2plot = {
'p_x': r'GPT2: $p(x)$',
'ppl': 'GPT2: PPL',
'p_y': 'RoBERTa: MSP',
}
metric2plot = {
'auroc': 'AUROC',
'far': 'FAR95'
}
dataset2plot = {
'imdb': 'IMDB',
'sst2': 'SST-2',
'snli': 'SNLI',
'counterfactual-imdb': 'c-IMDB',
'rte': 'RTE',
}
error_indices = {}
not_ready = []
for train_set in train_sets:
for eval_set in eval_sets:
if train_set == eval_set:
continue
ood_signal_dict = signals[(train_set, eval_set)]
id_signal_dict = signals[(train_set, train_set)]
skip=False
for value in ood_signal_dict.values():
if isinstance(value, type(None)):
skip=True
if skip:
not_ready.append((train_set, eval_set))
continue
pps_errors = compute_far(id_signal_dict['pps'], ood_signal_dict['pps'], return_indices=True)
msp_errors = compute_far(-id_signal_dict['msp'], -ood_signal_dict['msp'], return_indices=True)
error_indices[(train_set, eval_set)] = {'pps': pps_errors, 'msp': msp_errors}
print(list(error_indices.keys()))
###Output
[('imdb', 'sst2'), ('imdb', 'snli'), ('imdb', 'counterfactual-imdb'), ('imdb', 'rte'), ('sst2', 'imdb'), ('sst2', 'snli'), ('sst2', 'counterfactual-imdb'), ('sst2', 'rte')]
###Markdown
Partition Indices
###Code
indices_parts = {}
for ood_key, errors_dict in error_indices.items():
pps_id = set(errors_dict['pps']['id'])
pps_ood = set(errors_dict['pps']['ood'])
msp_id = set(errors_dict['msp']['id'])
msp_ood = set(errors_dict['msp']['ood'])
union = {
'id': pps_id.union(msp_id),
'ood': pps_ood.union(msp_ood),
}
common = {
'id': pps_id.intersection(msp_id),
'ood': pps_ood.intersection(msp_ood),
}
pps_only = {
'id': pps_id.difference(msp_id),
'ood': pps_ood.difference(msp_ood),
}
msp_only = {
'id': msp_id.difference(pps_id),
'ood': msp_ood.difference(pps_ood),
}
indices_parts[ood_key] = {
'union': union,
'common': common,
'pps': pps_only,
'msp': msp_only,
}
###Output
_____no_output_____
###Markdown
Partition Stats
###Code
stats = []
for (indomain, ood), partitions in indices_parts.items():
total_count = len(partitions['union']['id']) + len(partitions['union']['ood'])
common_count = len(partitions['common']['id']) + len(partitions['common']['ood'])
msp_count = len(partitions['msp']['id']) + len(partitions['msp']['ood']) + common_count
pps_count = len(partitions['pps']['id']) + len(partitions['pps']['ood']) + common_count
row = {
'in domain': indomain,
'ood': ood,
'total count': total_count,
'common count': common_count,
'msp only count': msp_count - common_count,
'pps only count': pps_count - common_count,
'common ratio': common_count/total_count,
'msp only ratio': (msp_count - common_count)/total_count,
'pps only ratio': (pps_count - common_count)/total_count,
}
stats.append(row)
print(pd.DataFrame(stats))
pd.DataFrame(stats).to_csv(os.path.join('.', 'error_analysis', 'error_counts_ratios.csv'))
###Output
in domain ood total count common count msp only count \
0 imdb sst2 18063 6200 10932
1 imdb snli 19371 2874 637
2 imdb counterfactual-imdb 19920 16083 1408
3 imdb rte 19770 4077 48
4 sst2 imdb 2429 177 1362
5 sst2 snli 1261 97 600
6 sst2 counterfactual-imdb 723 50 551
7 sst2 rte 363 85 173
pps only count common ratio msp only ratio pps only ratio
0 931 0.343243 0.605215 0.051542
1 15860 0.148366 0.032884 0.818750
2 2429 0.807380 0.070683 0.121938
3 15645 0.206222 0.002428 0.791351
4 890 0.072869 0.560725 0.366406
5 564 0.076923 0.475813 0.447264
6 122 0.069156 0.762102 0.168741
7 105 0.234160 0.476584 0.289256
###Markdown
Sample Examples
###Code
with open(os.path.join('.', 'all_val_data.p'), 'rb') as f:
datasets = pickle.load(f)
seed = 42
np.random.seed(seed)
nsample = 10
examples = []
ignore_ood = []
parts = ['common', 'msp', 'pps']
domains = ['id', 'ood']
for (indomain, ood), partitions in indices_parts.items():
if ood in ignore_ood:
continue
data = {
'id': datasets[('id', 'val', indomain)]['text'],
'ood': datasets[('ood', 'val', ood)]['text'],
}
for part in parts:
for domain in domains:
sample_size = min(nsample, len(partitions[part][domain]))
sample = np.random.choice(list(partitions[part][domain]), size=sample_size, replace=False)
print(indomain, ood, part, domain)
for idx in sample:
examples.append({
'in domain': indomain,
'ood': ood,
'text': data[domain][idx],
'domain': domain,
'dataset': indomain if domain == 'id' else ood,
'partition': part,
})
pd.DataFrame(examples).to_csv(os.path.join('.', 'error_analysis', 'error_examples.csv'))
###Output
imdb sst2 common id
imdb sst2 common ood
imdb sst2 msp id
imdb sst2 msp ood
imdb sst2 pps id
imdb sst2 pps ood
imdb snli common id
imdb snli common ood
imdb snli msp id
imdb snli msp ood
imdb snli pps id
imdb snli pps ood
imdb counterfactual-imdb common id
imdb counterfactual-imdb common ood
imdb counterfactual-imdb msp id
imdb counterfactual-imdb msp ood
imdb counterfactual-imdb pps id
imdb counterfactual-imdb pps ood
imdb rte common id
imdb rte common ood
imdb rte msp id
imdb rte msp ood
imdb rte pps id
imdb rte pps ood
sst2 imdb common id
sst2 imdb common ood
sst2 imdb msp id
sst2 imdb msp ood
sst2 imdb pps id
sst2 imdb pps ood
sst2 snli common id
sst2 snli common ood
sst2 snli msp id
sst2 snli msp ood
sst2 snli pps id
sst2 snli pps ood
sst2 counterfactual-imdb common id
sst2 counterfactual-imdb common ood
sst2 counterfactual-imdb msp id
sst2 counterfactual-imdb msp ood
sst2 counterfactual-imdb pps id
sst2 counterfactual-imdb pps ood
sst2 rte common id
sst2 rte common ood
sst2 rte msp id
sst2 rte msp ood
sst2 rte pps id
sst2 rte pps ood
###Markdown
Consistent OOD Examples
###Code
seed = 42
np.random.seed(seed)
import re
import string
def normalize_text(s):
"""Lower text and remove punctuation, articles and extra whitespace."""
def remove_articles(text):
regex = re.compile(r'\b(a|an|the)\b', re.UNICODE)
return re.sub(regex, ' ', text)
def white_space_fix(text):
return ' '.join(text.split())
def remove_punc(text):
exclude = set(string.punctuation)
return ''.join(ch for ch in text if ch not in exclude)
def lower(text):
return text.lower()
#return white_space_fix(remove_articles(remove_punc(lower(s))))
return white_space_fix(remove_punc(lower(s)))
def get_tokens(s):
if not s: return []
return normalize_text(s).split()
ood_stats, ood_examples = [], []
ignore_tasks = []
parts = ['common', 'msp', 'pps']
domains = ['ood']
nsample = 10
for (indomain, ood), partitions in indices_parts.items():
if ood in ignore_tasks:
continue
total_count = len(partitions['union']['ood'])
common_count = len(partitions['common']['ood'])
msp_count = len(partitions['msp']['ood']) + common_count
pps_count = len(partitions['pps']['ood']) + common_count
row = {
'in domain': indomain,
'ood': ood,
'total count': total_count,
'common count': common_count,
'msp only count': msp_count - common_count,
'pps only count': pps_count - common_count,
'common ratio': common_count/total_count,
'msp only ratio': (msp_count - common_count)/total_count,
'pps only ratio': (pps_count - common_count)/total_count,
}
ood_stats.append(row)
data = {'ood': datasets[('ood', 'val', ood)]}
for part in parts:
for domain in domains:
sample_size = min(nsample, len(partitions[part][domain]))
sample = np.random.choice(list(partitions[part][domain]), size=sample_size, replace=False)
print(indomain, ood, part, domain)
for idx in sample:
ood_examples.append({
'in domain': indomain,
'ood': ood,
'text': data[domain]['text'][idx],
'unigram length': len(get_tokens(data[domain]['text'][idx])),
'label': data[domain]['label'][idx],
'domain': domain,
'dataset': indomain if domain == 'id' else ood,
'partition': part,
})
pd.DataFrame(ood_stats).to_csv(os.path.join('.', 'error_analysis', 'ood_error_counts_ratios.csv'))
pd.DataFrame(ood_examples).to_csv(os.path.join('.', 'error_analysis', 'ood_error_examples.csv'))
###Output
imdb sst2 common ood
imdb sst2 msp ood
imdb sst2 pps ood
imdb snli common ood
imdb snli msp ood
imdb snli pps ood
imdb counterfactual-imdb common ood
imdb counterfactual-imdb msp ood
imdb counterfactual-imdb pps ood
imdb rte common ood
imdb rte msp ood
imdb rte pps ood
sst2 imdb common ood
sst2 imdb msp ood
sst2 imdb pps ood
sst2 snli common ood
sst2 snli msp ood
sst2 snli pps ood
sst2 counterfactual-imdb common ood
sst2 counterfactual-imdb msp ood
sst2 counterfactual-imdb pps ood
sst2 rte common ood
sst2 rte msp ood
sst2 rte pps ood
###Markdown
Error
###Code
all_ood_examples = []
ignore_tasks = []
parts = ['common', 'msp', 'pps']
domains = ['ood']
nli = ['snli', 'rte']
nsample = 10
for (indomain, ood), partitions in indices_parts.items():
data = {'ood': datasets[('ood', 'val', ood)]}
for part in parts:
for domain in domains:
for idx in list(partitions[part][domain]):
overlap = 0
if ood in nli:
for punct in [ '.', '!', '?']:
text_list = data[domain]['text'][idx].split(punct)
if len(text_list) > 1:
break
if text_list[-1] == '':
prem, hyp = ' '.join(text_list[:-2]), text_list[-2]
else:
prem, hyp = ' '.join(text_list[:-1]), text_list[-1]
prem = set(prem.split(' '))
hyp = set(hyp.split(' '))
overlap = len(hyp.intersection(prem)) / len(hyp)
all_ood_examples.append({
'in domain': indomain,
'ood': ood,
'text': data[domain]['text'][idx],
'unigram length': len(get_tokens(data[domain]['text'][idx])),
'label': data[domain]['label'][idx],
'domain': domain,
'dataset': indomain if domain == 'id' else ood,
'partition': part,
'overlap': overlap
})
all_ood_examples_df = pd.DataFrame(all_ood_examples)
###Output
_____no_output_____
###Markdown
Lengths
###Code
keeps = ['in domain', 'ood', 'unigram length', 'partition']
groupby = ['in domain', 'ood', 'partition']
grouped_lengths = all_ood_examples_df[keeps].groupby(groupby)
lengths_summary = grouped_lengths.mean()
lengths_summary['std'] = grouped_lengths.std()['unigram length']
lengths_summary = lengths_summary.rename(columns = {'unigram length':'mean'})
print(lengths_summary)
lengths_summary.to_csv(os.path.join('.', 'error_analysis', 'ood_error_lengths_stats.csv'))
###Output
_____no_output_____
###Markdown
Labels
###Code
keeps = ['in domain', 'ood', 'label', 'partition', 'unigram length']
groupby = ['in domain', 'ood', 'partition', 'label']
grouped_labels = all_ood_examples_df[keeps].groupby(groupby)
keeps = ['in domain', 'ood', 'partition', 'unigram length']
groupby = ['in domain', 'ood', 'partition']
grouped_counts = all_ood_examples_df[keeps].groupby(groupby)
grouped_counts_df = grouped_counts.count().rename(columns = {'unigram length': 'total'})
print(grouped_counts_df)
labels_summary = grouped_labels.count().rename(columns = {'unigram length': 'counts'})
print(labels_summary)
labels_total_summary = labels_summary.join(grouped_counts_df, on=['in domain', 'ood', 'partition'], how='left')
labels_total_summary['ratio'] = labels_total_summary['counts'] / labels_total_summary['total']
print(labels_total_summary)
labels_total_summary.to_csv(os.path.join('.', 'error_analysis', 'ood_error_labels_stats.csv'))
###Output
_____no_output_____
###Markdown
Overlap
###Code
keeps = ['in domain', 'ood', 'overlap', 'partition']
groupby = ['in domain', 'ood', 'partition']
grouped_overlaps = all_ood_examples_df[keeps].groupby(groupby)
overlaps_summary = grouped_overlaps.mean()
overlaps_summary['std'] = grouped_overlaps.std()['overlap']
overlaps_summary = overlaps_summary.rename(columns = {'overlap':'mean'})
print(overlaps_summary)
overlaps_summary.to_csv(os.path.join('.', 'error_analysis', 'ood_error_overlaps_stats.csv'))
###Output
_____no_output_____
###Markdown
Check Signals
###Code
print('sst2 in domain', 'msp', len(signals[('sst2', 'sst2')]['msp']))
print('sst2 out domain', 'msp',len(signals[('imdb', 'sst2')]['msp']))
print('sst2 in domain', 'pps',len(signals[('sst2', 'sst2')]['pps']))
print('sst2 out domain', 'pps',len(signals[('imdb', 'sst2')]['pps']))
print('='*45)
print('imdb in domain', 'msp', len(signals[('imdb', 'imdb')]['msp']))
print('imdb out domain', 'msp',len(signals[('sst2', 'imdb')]['msp']))
print('imdb in domain', 'pps',len(signals[('imdb', 'imdb')]['pps']))
print('imdb out domain', 'pps',len(signals[('sst2', 'imdb')]['pps']))
print('snli', 'sst2 in domain', 'msp', len(signals[('sst2', 'snli')]['msp']))
print('snli', 'imdb in domain', 'msp',len(signals[('imdb', 'snli')]['msp']))
print('='*45)
print('rte', 'sst2 in domain', 'msp', len(signals[('sst2', 'rte')]['msp']))
print('rte', 'imdb in domain', 'msp',len(signals[('imdb', 'rte')]['msp']))
print('snli', 'sst2 in domain', 'pps', len(signals[('sst2', 'snli')]['pps']))
print('snli', 'imdb in domain', 'pps',len(signals[('imdb', 'snli')]['pps']))
print('='*45)
print('rte', 'sst2 in domain', 'pps', len(signals[('sst2', 'rte')]['pps']))
print('rte', 'imdb in domain', 'pps',len(signals[('imdb', 'rte')]['pps']))
###Output
snli sst2 in domain pps 10000
snli imdb in domain pps 10000
=============================================
rte sst2 in domain pps 277
rte imdb in domain pps 277
|
tests/ddpg_test.ipynb | ###Markdown
Create environment
###Code
env_name = "Pendulum-v0"
env = gym.make(env_name)
env.seed(0)
state_size = env.observation_space.shape[0]
action_size = env.action_space.shape[0]
print("State size:", state_size, "\nAction size:", action_size)
print(env.action_space.high, env.action_space.low)
###Output
State size: 3
Action size: 1
[2.] [-2.]
###Markdown
Define networks for different algorithms
###Code
tmax = 500
n_episodes = 2_000
seed = 0
###Output
_____no_output_____
###Markdown
DDPG Test. Test the standard DDPG algorithm
###Code
value_net = models.DDPGValueNetwork(state_size, action_size)
policy_net = models.DDPGPolicyNetwork(state_size, action_size)
max_act = env.action_space.high
min_act = env.action_space.low
theta=0.15
sigma=0.2
noise_process = OrnsteinUhlenbeck(x_size=env.action_space.shape, mu=0,
sigma_init=sigma, sigma_final=sigma,
sigma_horizon=1, theta=theta)
action = env.action_space.sample()
#noise_proc = OrnsteinUhlenbeck(x0=np.zeros_like(action))
lr_val = 1e-3
lr_pol = 1e-4
# init agent:
agent = DDPG(policy_net=policy_net,
value_net=value_net,
lr_val=lr_val,
lr_pol=lr_pol,
buf_size=int(1e6),
device=device,
max_grad_norm=0.5,
noise_process=noise_process,
min_act=min_act,
max_act=max_act,
learn_every=1,
warm_up=1e4,
seed=0)
alg_name = "ddpg_{}".format(env_name)
max_score = -20.
scores = agent.train(env, tmax, n_episodes, alg_name, max_score)
plot(scores, 50)
###Output
_____no_output_____
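###Markdown
For context, Ornstein-Uhlenbeck exploration noise is a mean-reverting random walk. Below is a generic, self-contained sketch of the standard update rule; it is not the project's `OrnsteinUhlenbeck` class, whose constructor arguments are simply taken as given in the cell above.
###Code
import numpy as np

class SimpleOUNoise:
    """Generic Ornstein-Uhlenbeck process (illustration only)."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1.0, seed=0):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.dt = dt
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # start the process at its long-run mean
        self.x = np.copy(self.mu)

    def sample(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape))
        self.x = self.x + dx
        return self.x

noise = SimpleOUNoise(size=1)
[noise.sample() for _ in range(3)]
###Output
_____no_output_____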
###Markdown
4.1 Trained Agent Demonstration
###Code
agent.test(env, tmax, render=True, n_episodes=5)
###Output
_____no_output_____ |
Week 07 - Mid Term Project/Week07-Project-BookingCancellation.ipynb | ###Markdown
Hotel Booking Cancellation Prediction Problem: any booking can be cancelled by the client, but past cancellations may follow patterns that can be used to predict future ones. Goal: predict whether a hotel booking will be cancelled (feature **is_canceled**). ML model: classification. Dataset: [Kaggle - Hotel Booking Demand](https://www.kaggle.com/jessemostipak/hotel-booking-demand)
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
df = pd.read_csv("hotel_bookings.csv")
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 119390 entries, 0 to 119389
Data columns (total 32 columns):
hotel 119390 non-null object
is_canceled 119390 non-null int64
lead_time 119390 non-null int64
arrival_date_year 119390 non-null int64
arrival_date_month 119390 non-null object
arrival_date_week_number 119390 non-null int64
arrival_date_day_of_month 119390 non-null int64
stays_in_weekend_nights 119390 non-null int64
stays_in_week_nights 119390 non-null int64
adults 119390 non-null int64
children 119386 non-null float64
babies 119390 non-null int64
meal 119390 non-null object
country 118902 non-null object
market_segment 119390 non-null object
distribution_channel 119390 non-null object
is_repeated_guest 119390 non-null int64
previous_cancellations 119390 non-null int64
previous_bookings_not_canceled 119390 non-null int64
reserved_room_type 119390 non-null object
assigned_room_type 119390 non-null object
booking_changes 119390 non-null int64
deposit_type 119390 non-null object
agent 103050 non-null float64
company 6797 non-null float64
days_in_waiting_list 119390 non-null int64
customer_type 119390 non-null object
adr 119390 non-null float64
required_car_parking_spaces 119390 non-null int64
total_of_special_requests 119390 non-null int64
reservation_status 119390 non-null object
reservation_status_date 119390 non-null object
dtypes: float64(4), int64(16), object(12)
memory usage: 29.1+ MB
###Markdown
EDA: target value distribution
###Code
df["is_canceled"].value_counts().plot.bar(rot=0)
###Output
_____no_output_____
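###Markdown
The same class balance can also be read off numerically; a one-liner sketch on the `df` loaded above:
###Code
df["is_canceled"].value_counts(normalize=True)
###Output
_____no_output_____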
###Markdown
NaN values
###Code
df.isna().sum()
###Output
_____no_output_____
###Markdown
!To be removed: agent & company; then fill NA: country & children with MODE
###Code
df = df.drop(["agent", "company", "country"], axis=1)
numerical_cols = df.drop(["is_canceled"], axis=1).select_dtypes(include=np.number).columns.tolist()
categorical_cols = df.drop(["is_canceled"], axis=1).select_dtypes(include=object).columns.tolist()
numerical_cols
for col in numerical_cols:
plt.figure(figsize=(15,5))
sns.distplot(df[col])
plt.title(col)
plt.show()
categorical_cols
for col in categorical_cols:
if col == "country":
plt.figure(figsize=(5,30))
sns.countplot(y=df[col])
plt.title(col)
plt.show()
else:
plt.figure(figsize=(15,3))
sns.countplot(df[col])
plt.title(col)
plt.show()
###Output
C:\Users\DV\Anaconda3\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
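###Markdown
The note above mentions filling the missing `country` and `children` values with the mode; a sketch of that mode-based fill is shown below (not applied here: `country` was dropped above, and `children` is filled with 0 further down):
###Code
# mode-based fill from the note, for illustration only (no assignment, so df is unchanged)
df['children'].fillna(df['children'].mode()[0])
###Output
_____no_output_____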
###Markdown
Feature Engineering
###Code
categorical_cols
df["reservation_date_year"] = pd.to_datetime(df["reservation_status_date"]).dt.year
df["reservation_date_month"] = pd.to_datetime(df["reservation_status_date"]).dt.month
df["reservation_date_day"] = pd.to_datetime(df["reservation_status_date"]).dt.day
df = df.drop(["reservation_status_date"], axis=1)
# integer-encode the categorical columns (and the year columns) via pandas category codes
cat_code_cols = ['hotel', 'arrival_date_year', 'arrival_date_month', 'meal',
                 'market_segment', 'distribution_channel', 'reserved_room_type',
                 'assigned_room_type', 'deposit_type', 'customer_type',
                 'reservation_status', 'reservation_date_year']
for col in cat_code_cols:
    df[col] = df[col].astype('category').cat.codes
sc = MinMaxScaler()
df.adr = sc.fit_transform(np.expand_dims(df.adr, axis=1))
df.adr = df.adr.astype(int)
df.children = sc.fit_transform(np.expand_dims(df.children, axis=1))
df.children = df.children.fillna(0)
df.children = df.children.astype(int)
df.info()
df = df.astype("int64")
df.iloc[0]
###Output
_____no_output_____
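###Markdown
For reference, `.astype('category').cat.codes` replaces each distinct value with an integer code (categories are taken in sorted order by default); a toy illustration on made-up values:
###Code
# hypothetical example, independent of the booking data
pd.Series(['City Hotel', 'Resort Hotel', 'City Hotel']).astype('category').cat.codes
###Output
_____no_output_____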
###Markdown
Split Dataset
###Code
df = df.reset_index()
X = df.drop(["is_canceled"], axis=1)
y = df["is_canceled"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# dv = DictVectorizer(sparse=False)
# train_dict = df_train.to_dict(orient='records')
# X_train = dv.fit_transform(train_dict)
# test_dict = df_test.to_dict(orient='records')
# X_test = dv.transform(test_dict)
###Output
_____no_output_____
###Markdown
Model Training
###Code
np.any(np.isnan(X_train))
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print(accuracy_score(y_pred, y_test))
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
print(accuracy_score(y_pred, y_test))
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
print(accuracy_score(y_pred, y_test))
xgb = XGBClassifier()
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
print(accuracy_score(y_pred, y_test))
###Output
1.0
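###Markdown
Accuracy alone hides which class the errors fall on; a short sketch of per-class metrics that reuses `y_test` and the last `y_pred` (from the XGBoost cell above):
###Code
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
_____no_output_____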
|
Python for Finance - Code Files/85 Obtaining the Efficient Frontier in Python - Part I/CSV/Python 3 CSV/Obtaining the Efficient Frontier - Part I - Solution_CSV.ipynb | ###Markdown
Obtaining the Efficient Frontier - Part I *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* We are in the middle of a set of 3 Python lectures that will help you reproduce the Markowitz Efficient Frontier. Let’s split this exercise into 3 parts and cover the first part here. Begin by loading data for Walmart and Facebook from the 1st of January 2014 until today.
###Code
import numpy as np
import pandas as pd
from pandas_datareader import data as wb
import matplotlib.pyplot as plt
%matplotlib inline
assets = ['WMT', 'FB']
pf_data = pd.read_csv('D:/Python/Walmart_FB_2014_2017.csv', index_col='Date')
###Output
_____no_output_____
###Markdown
Do a quick check of the data, normalize it to 100, and see how the 2 stocks were doing during the given timeframe.
###Code
pf_data.tail()
(pf_data / pf_data.iloc[0] * 100).plot(figsize=(10, 5))
###Output
_____no_output_____
###Markdown
Calculate their logarithmic returns.
###Code
log_returns = np.log(pf_data / pf_data.shift(1))
###Output
_____no_output_____
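###Markdown
For reference, the log return computed above is $r_t = \ln(P_t / P_{t-1})$; `pf_data.shift(1)` supplies the previous day's price $P_{t-1}$, so the first row of `log_returns` is NaN.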
###Markdown
Create a variable that carries the number of assets in your portfolio.
###Code
num_assets = len(assets)
num_assets
###Output
_____no_output_____
###Markdown
The portfolio need not be equally weighted. So, create a variable called “weights”. Let it contain as many randomly generated values as there are assets in your portfolio. Don’t forget these values should be neither smaller than 0 nor equal to or greater than 1! *Hint: There is a specific NumPy function that allows you to generate such values. It is the one we used in the lecture - NumPy.random.random().*
###Code
weights = np.random.random(num_assets)
weights /= np.sum(weights)
weights
###Output
_____no_output_____
###Markdown
Sum the obtained values to obtain 1 – summing up the weights to 100%!
###Code
weights[0] + weights[1]
###Output
_____no_output_____ |
03_Grouping/Alcohol_Consumption/Exercise.ipynb | ###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
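###Markdown
As a quick illustration of the Split-Apply-Combine idea mentioned in the introduction, here is a toy frame (made up, unrelated to the drinks data): rows are *split* into groups by a key, a function is *applied* to each group, and the results are *combined* into one value per group. It uses the pandas imported above.
###Code
toy = pd.DataFrame({'key': ['a', 'a', 'b'], 'value': [1, 2, 10]})
toy.groupby('key')['value'].mean()   # a -> 1.5, b -> 10.0
###Output
_____no_output_____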
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('../../data/drinks.csv', ',')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks_grouped = drinks.groupby('continent').sum()
drinks_grouped['beer_avg'] = drinks_grouped['beer_servings']/(drinks_grouped['beer_servings'] + drinks_grouped['spirit_servings']
+drinks_grouped['wine_servings'] + drinks_grouped['total_litres_of_pure_alcohol'])
drinks_grouped.beer_avg.sort_values(ascending=False).head(1)
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks_grouped.wine_servings.describe()
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url)
drinks.head(5)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
beer_cons_by_cont = drinks.groupby("continent").mean().sort_values(by = "beer_servings", ascending = False)
beer_servs = round(beer_cons_by_cont.loc[beer_cons_by_cont.index[0]].beer_servings)
print("the continent consuming the most beer is " +
beer_cons_by_cont.index[0] +
" with " +
str(beer_servs) +
" beer servings")
###Output
the continent consuming the most beer is EU with 194.0 beer servings
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent").wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
booz_stats = drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
booz_stats
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url, sep=',')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean().idxmax()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
averageBeer = drinks.groupby('continent').beer_servings.mean()
averageBeer.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.spirit_servings.agg(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['min', 'max', 'mean'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.spirit_servings.min()
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url, sep=",")
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').total_litres_of_pure_alcohol.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').total_litres_of_pure_alcohol.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
conti = drinks.groupby('continent')
conti.spirit_servings.aggregate(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv", sep=",")
drinks.head(10)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
pd.DataFrame(drinks.groupby(by="continent").beer_servings.mean().sort_values(ascending=False)).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
pd.DataFrame(drinks.groupby(by="continent").wine_servings.describe())
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(by="continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(by="continent").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby(by="continent").spirit_servings.agg(["mean", "min", "max"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks=pd.read_csv('../../exercise_data/drinks.csv',sep=',')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(['continent']).agg(np.mean).sort_values('beer_servings',ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').agg(np.median)
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
a=drinks.groupby('continent').spirit_servings.agg([np.mean,np.min,np.max])
print(a)
print(a.__class__)
###Output
mean amin amax
continent
AF 16.339623 0 152
AS 60.840909 0 326
EU 132.555556 0 373
OC 58.437500 0 254
SA 114.750000 25 302
<class 'pandas.core.frame.DataFrame'>
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
continents = drinks.groupby('continent').mean('beer_servings').sort_values('beer_servings', ascending=False)
continents
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').describe()['wine_servings']
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean('total_litres_of_pure_alcohol')
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median('total_litres_of_pure_alcohol')
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').describe()['spirit_servings']
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby("continent")["beer_servings"].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent")["wine_servings"].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby("continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby("continent").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby("continent")["spirit_servings"].agg(["mean", "min", "max"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
pd.set_option("display.max_rows", 0)
pd.__version__
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head(5)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(by='continent').mean()['beer_servings']
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent').describe()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max']).reset_index()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
avg_beer = drinks.beer_servings.mean()
avg_continent = drinks.groupby('continent').beer_servings.mean()
avg_continent[avg_continent > avg_beer]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(['continent'])['beer_servings'].mean().sort_values().index[-1]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['min','mean','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(['continent']).agg('mean')['beer_servings']
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').agg('mean')
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv).
###Code
url = r'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
data = pd.read_csv(url)
###Output
_____no_output_____
###Markdown
Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.DataFrame(data)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks_continental = drinks.groupby('continent')
drinks_continental['beer_servings'].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks_continental['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks_continental.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks_continental.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks_continental['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv).
###Code
pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
###Output
_____no_output_____
###Markdown
Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
beer_rank = drinks.groupby(by="continent").mean()
beer_rank.sort_values("beer_servings", ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby(by='continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent').total_litres_of_pure_alcohol.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent').total_litres_of_pure_alcohol.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby(by="continent").spirit_servings.agg(["min","max","mean"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean().sort_values(ascending = False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['min', 'mean', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks=pd.read_csv('C:/Users/GH-16/OneDrive/Documents/GitHub/pandas_exercises/03_Grouping/Alcohol_Consumption/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks[drinks['wine_servings']>drinks['wine_servings'].mean()]['continent'].drop_duplicates()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks[['wine_servings','continent']].groupby('continent').describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
spirit_servings=drinks.agg({'spirit_servings':['mean','min','max']})
spirit_servings.head()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks=pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv",sep=',')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').idxmax()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').idxmax().describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].idxmax().describe()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
avg_beer_per_continent = drinks.groupby('continent').beer_servings.mean()
avg_beer_per_continent.sort_values(ascending=False).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby("continent")["beer_servings"].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent").wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby("continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby("continent").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby("continent")["spirit_servings"].agg([np.mean, np.min, np.max])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')[['beer_servings']].mean().sort_values?
drinks.groupby('continent')[['beer_servings']].mean().sort_values('beer_servings', ascending = False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')[['wine_servings']].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')[['spirit_servings']].describe()['spirit_servings'][['mean', 'min', 'max']]
drinks.groupby('continent')[['spirit_servings']].describe().columns
idx = pd.IndexSlice
drinks.groupby('continent')[['spirit_servings']].describe().loc[:,idx['spirit_servings',['mean', 'min', 'max']]]
drinks.groupby('continent')[['spirit_servings']].describe().loc[:,(['spirit_servings'],['mean', 'min', 'max'])]
drinks.groupby('continent')[['spirit_servings']].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.info()
drinks.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 country 193 non-null object
1 beer_servings 193 non-null int64
2 spirit_servings 193 non-null int64
3 wine_servings 193 non-null int64
4 total_litres_of_pure_alcohol 193 non-null float64
5 continent 170 non-null object
dtypes: float64(1), int64(3), object(2)
memory usage: 9.2+ KB
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean().sort_values(ascending = False).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg([np.mean, min, max])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
link = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(link)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').mean().sort_values('beer_servings', ascending = False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
contin = drinks.groupby('continent')
contin.wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
contin.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
contin.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
contin.spirit_servings.agg(['mean','median','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
df = pd.read_csv('drinks.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drink_on_average = df.groupby('continent').beer_servings.mean()
# drink_on_average.head()
print(drink_on_average.idxmax())
###Output
EU
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
df.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
df.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
df.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
df.groupby('continent').spirit_servings.agg(['mean', 'min', 'max', "median"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')[['beer_servings']].mean().sort_values('beer_servings', ascending=False).iloc[0]
# Original solution:
#
# drinks.groupby('continent')[['beer_servings']].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')[['wine_servings']].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').mean()['beer_servings'].sort_values(ascending=False).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv', sep = ',')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
beer = drinks[['continent','beer_servings']]
beer.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
wine = drinks.groupby('continent').wine_servings.describe()
wine
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
alcohol = drinks.groupby('continent')
alcohol.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
alcohol.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
spirit = drinks.groupby('continent').spirit_servings.agg(['mean', 'min',
'max'])
spirit
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
beer = drinks.groupby('continent').beer_servings.mean().sort_values(ascending=False)
beer.head()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
wine = drinks.groupby('continent').wine_servings.describe()
wine.head()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
alc = drinks.groupby('continent').mean()
alc.head()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url, sep = ',')
drinks.head(10)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean().sort_values(ascending = False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url,sep=',')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(by=['continent']).mean().sort_values('beer_servings',ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby(['continent']).describe().T
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')["spirit_servings"].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby("continent").mean()["beer_servings"].idxmax()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent").describe()["wine_servings"]
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby("continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby("continent").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby("continent")["spirit_servings"].agg(["mean", "min", "max"])
# drinks["spirit_servings"].mean(), drinks["spirit_servings"].min(), drinks["spirit_servings"].max()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
df = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
df.head()
dfg = df.groupby('continent')
dfg
type(dfg)
type(df)
dfg
dfg.mean()
dfg.get_group('EU')
dfg.get_group('AF')
dfg.mean()
###Output
_____no_output_____
###Markdown
[ ] Step 4. Which continent drinks more beer on average? 1. I computed the maximum value 2. and I am passing it inside the [ ] so it filters the rows for me... 3. why doesn't it work?
###Code
mask = drinks.beer_servings.max()
drinks[mask]
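# The line above fails: mask here is a single number (the maximum of beer_servings), so drinks[mask]
# tries to look it up as a column label instead of filtering rows. A boolean Series is needed, as tried next.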
mask = drinks.beer_servings == drinks.beer_servings.max()
drinks[mask]
drinks = drinks.set_index('country')
max_drinkers = drinks.beer_servings.idxmax()
drinks.loc[max_drinkers, :]
#drinks[drinks.beer_servings.max()] # why doesn't this work?
c=drinks.beer_servings
c.sort_values(ascending=False)
drinks.sort_values('beer_servings', ascending=False)
drinks.sort_values('beer_servings', ascending=False).iloc[0]
drinks.sort_values('beer_servings', ascending=False).iloc[[0]]
drinks.sort_values('beer_servings', ascending=False).head(1)
drinks.sort_values('beer_servings', ascending=False).loc[['Namibia']]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
grouped_by_continent = drinks.groupby("continent")
grouped_by_continent["beer_servings"].agg("mean").sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
grouped_by_continent["wine_servings"].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
grouped_by_continent.agg("mean")
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
grouped_by_continent.agg("median")
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
grouped_by_continent["spirit_servings"].agg(["mean", "min", "max"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
# drinks.sort_values('beer_servings', ascending=False).head()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
# drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url, sep=',')
drinks.head()
drinks.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 193 entries, 0 to 192
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 country 193 non-null object
1 beer_servings 193 non-null int64
2 spirit_servings 193 non-null int64
3 wine_servings 193 non-null int64
4 total_litres_of_pure_alcohol 193 non-null float64
5 continent 170 non-null object
dtypes: float64(1), int64(3), object(2)
memory usage: 9.2+ KB
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.sort_values('beer_servings',ascending= False).head(1)
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks_max_beer = drinks.groupby(by='continent')['beer_servings'].mean()
drinks_max_beer
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
#"agg" sirve para efectua una o más operaciones sobre el eje especificado.
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv",",")
drinks.head(10)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent")["wine_servings"].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby("continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby("continent").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby("continent")["spirit_servings"].agg([np.mean,np.min,np.max])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv',
sep=',')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.dtypes
# strinks = drinks.continent.convert_dtypes(infer_objects=False, convert_string=True)
as_drinks = drinks[drinks.continent == 'AS']
AS_litres = as_drinks.total_litres_of_pure_alcohol.sum()
na_drinks = drinks[drinks.continent == 'NA']
NA_litres = na_drinks.total_litres_of_pure_alcohol.sum()
eu_drinks = drinks[drinks.continent == 'EU']
EU_litres = eu_drinks.total_litres_of_pure_alcohol.sum()
sa_drinks = drinks[drinks.continent == 'SA']
SA_litres = sa_drinks.total_litres_of_pure_alcohol.sum()
af_drinks = drinks[drinks.continent == 'AF']
AF_litres = af_drinks.total_litres_of_pure_alcohol.sum()
au_drinks = drinks[drinks.continent == 'AU']
AU_litres = au_drinks.total_litres_of_pure_alcohol.sum()
continent_litres = [AS_litres, NA_litres, EU_litres, SA_litres, AF_litres, AU_litres]
continent_litres
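# Note: by default read_csv treats the string 'NA' as a missing value, so drinks.continent == 'NA'
# matches no rows and NA_litres above comes out as 0.0 rather than North America's total.
# Passing keep_default_na=False to read_csv is one way to keep 'NA' as a literal continent label.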
# I was able to come to the conclusion that Europe is the winner here,
# but again, I'm going about this in a hugely roundabout way.
# all it takes to get an answer is 1 line of code...
# I'll need to learn more or try again when my brain isn't tired.
# here is the better way to do this:
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').mean().min().max()
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
# ...maybe data science isn't for me, or maybe my mind is just soup today and
# none of this makes any sense...
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url, sep = ',')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean().sort_values(ascending = False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'median', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
df = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
df
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
df.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
df.groupby('continent').wine_servings.describe()
# Why does this come back as a DataFrame and not a float64 like the previous one?
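# describe() computes several statistics (count, mean, std, ...) for each group, so the result is a
# two-dimensional DataFrame; mean() collapses each group to a single number, which is why the
# previous cell came back as a float64 Series.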
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
df.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
df.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
df.groupby('continent').spirit_servings.min()
df.groupby('continent').spirit_servings.max()
df.groupby('continent').spirit_servings.mean()
# A BIT OF A CHEAT
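# Note that only the last expression in the cell is displayed; a single .agg call returns all three
# statistics at once, as the DataFrame the step asks for:
df.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])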
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url,index_col='country')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.loc[drinks['beer_servings'].idxmax()].name
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
bycontinent = drinks.groupby('continent')
bycontinent.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
bycontinent.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
bycontinent.agg(np.median)
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
bycontinent['spirit_servings'].agg(['mean','min','max'])
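# The next three lines show alternative ways to name the aggregated columns explicitly:
# pd.NamedAgg objects, keyword arguments with lambdas, and (deprecated, as the warning below notes) a dict of functions.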
bycontinent.agg(max_spirits=pd.NamedAgg(column='spirit_servings', aggfunc=np.max),min_spirits=pd.NamedAgg(column='spirit_servings', aggfunc=np.min),mean_spirits=pd.NamedAgg(column='spirit_servings', aggfunc=np.mean))
bycontinent['spirit_servings'].agg(max_spirits=lambda x: np.max(x),min_spirits=lambda x: np.min(x),mean_spirits=lambda x: np.mean(x))
bycontinent['spirit_servings'].agg({'max_spirits':lambda x: np.max(x),'min_spirits':lambda x: np.min(x),'mean_spirits':lambda x: np.mean(x)}) #warning deprecated to use dicts
###Output
/anaconda3/envs/FTDS/lib/python3.7/site-packages/ipykernel_launcher.py:1: FutureWarning: using a dict on a Series for aggregation
is deprecated and will be removed in a future version. Use named aggregation instead.
>>> grouper.agg(name_1=func_1, name_2=func_2)
"""Entry point for launching an IPython kernel.
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby("continent")[["beer_servings"]].mean().sort_values("beer_servings", ascending=False)
# Europe
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent")[["wine_servings"]].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby("continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby("continent").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks[["spirit_servings"]].aggregate(["min", "mean", "max"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks.head(3)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')\
    .agg({'beer_servings':'mean'})\
    .sort_values(by='beer_servings', ascending=False).iloc[0:1]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
calc = lambda x: round(x*100/x.sum(),1) #x['wine_servings']/
cont_pct_drinks = drinks.groupby('continent')['beer_servings', 'spirit_servings', 'wine_servings'].sum()\
.agg(calc, axis=1)
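# agg(calc, axis=1) applies the lambda row by row, so each continent's beer/spirit/wine sums
# become percentage shares that add up to roughly 100.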
cont_pct_drinks
sns.set('talk', 'ticks')
fig,ax = plt.subplots(1)
fig.patch.set_facecolor('white') # fig.patch.set_alpha(1)
_ = cont_pct_drinks.reset_index()
colors = ['gold', 'green', 'purple']
dict_cont={'SA': 'South Am.', 'OC': 'Oceania', 'EU': 'Europe',
'AS': 'Asia', 'AF': 'Africa'}
_.plot.barh(x='continent', y=_.columns[1:], width=0.9,color=colors, stacked=True,rot=0, ax=ax)
ax.set_xlabel("Servings(percent)")
vals = _.iloc[:,1:].values.reshape(-1)
for i, (p, v) in enumerate(zip(ax.patches, vals)):
ax.text(p.get_width()/2 + p.get_x()-2, p.get_y()+0.35, str(v)+'%', fontsize=14)
ax.set_yticklabels(_['continent'].map(dict_cont))
sns.despine()
plt.legend(['beers', 'spirits', 'wines'], bbox_to_anchor=(1,1.15), fontsize=17, ncol=3)
plt.show()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.head()
drinks.rename(columns={'country': 'cnt', 'beer_servings': 'beer',
'spirit_servings': 'spirit', 'wine_servings': 'wine',
'total_litres_of_pure_alcohol': 'tot_alc', 'continent': 'cont' },
inplace=True)
drinks.head(2)
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
agg_df = drinks.groupby('cont').agg([np.median, min, max])
agg_df.columns = ['__'.join(c).strip() for c in agg_df.columns.values]
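# flatten the two-level (column, statistic) header from agg into single names such as 'tot_alc__median'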
agg_df.head()
fig, ax = plt.subplots(1)
ax.scatter(x=np.arange(5), y=agg_df.reset_index()['tot_alc__min'], s=50, c='b', label='min', zorder=10)
ax.scatter(x=np.arange(5), y=agg_df.reset_index()['tot_alc__max'], s=50, c='r', label='max', zorder=10)
ax.vlines(x=np.arange(5),
ymin=agg_df.reset_index()['tot_alc__min'],
ymax=agg_df.reset_index()['tot_alc__max'],
color='k', zorder=5)
agg_df.reset_index().plot.bar(x='cont', y='tot_alc__median', width=0.9, ax=ax, rot=0,color='grey', zorder=2)
ax.legend(frameon=1, loc='upper left', prop={'size': 15}).get_frame().set_alpha(0.3)
ax.axis(option='square')
ax.set_xlabel('Continent')
sns.despine()
# Which country in Europe drink no alcohol ?
drinks[(drinks['tot_alc']==0) & (drinks['cont']=='EU')]
# Which country in each continent Europe drink less alcohol (but not zero) ?
drinks[drinks['tot_alc']!=0].groupby('cont').min()
# Which country in each continent Europe drink the more alcohol ?
drinks.groupby('cont').max()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks['spirit'].agg([min, max, np.mean])
# Countries that have minimum value for spirit_servings
for n in drinks.loc[drinks['spirit']==drinks['spirit'].min()]['cnt'].to_list():
print(n,end=' | ')
# Country that have maximum value for spirit_servings
print(drinks.loc[drinks['spirit']==drinks['spirit'].max()]['cnt'])
# Mean value for spirit servings
drinks['spirit'].mean()
# List of countries drinking less spirit than mean value
for n in drinks.loc[drinks['spirit']<drinks['spirit'].mean()]['cnt'].to_list():
print(n, end=' | ')
# List of countries drinking more spirit than mean value
for n in drinks.loc[drinks['spirit']>drinks['spirit'].mean()]['cnt'].to_list():
print(n, end=' | ')
###Output
Albania | Andorra | Antigua & Barbuda | Armenia | Bahamas | Barbados | Belarus | Belgium | Belize | Bosnia-Herzegovina | Brazil | Bulgaria | Canada | Chile | China | Cook Islands | Costa Rica | Croatia | Cuba | Cyprus | Czech Republic | Denmark | Dominica | Dominican Republic | Estonia | Finland | France | Gabon | Georgia | Germany | Greece | Grenada | Guyana | Haiti | Honduras | Hungary | India | Ireland | Jamaica | Japan | Kazakhstan | Kyrgyzstan | Latvia | Liberia | Lithuania | Luxembourg | Malta | Mongolia | Montenegro | Netherlands | Nicaragua | Niue | Panama | Paraguay | Peru | Philippines | Poland | Moldova | Romania | Russian Federation | St. Kitts & Nevis | St. Lucia | St. Vincent & the Grenadines | Serbia | Slovakia | Spain | Sri Lanka | Suriname | Switzerland | Thailand | Trinidad & Tobago | Ukraine | United Arab Emirates | United Kingdom | USA | Uzbekistan | Venezuela |
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').mean()['beer_servings'].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').describe()['wine_servings']
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.describe().loc[:,['mean','min','max']]
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby("continent").beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent").wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(["mean", "min","max"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
d = drinks
beer_avg = d.beer_servings.mean()
g = d.groupby(['continent'])[['beer_servings']].mean()
g[g.beer_servings > beer_avg]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
d.groupby('continent')[['wine_servings']].describe()
# d.groupby('beer_servings').min()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
d.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
d.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
c = drinks.groupby(['continent']).mean()
c
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
for i,ind in enumerate(c.index):
print(f"{ind} drinks {np.round(c.loc[ind,'wine_servings'],2)} servings of wine on average")
###Output
AF drinks 16.26 servings of wine on average
AS drinks 9.07 servings of wine on average
EU drinks 142.22 servings of wine on average
OC drinks 35.62 servings of wine on average
SA drinks 62.42 servings of wine on average
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
c  # c (computed in Step 4) already holds the per-continent mean of every numeric column
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
description = drinks.describe()
description.loc[['mean', 'min', 'max'], ['spirit_servings']]
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').mean().sort_values('beer_servings', ascending = False).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drink = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv', sep = ',')
drink.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
beer_continent = drink.groupby('continent')['beer_servings'].describe()
beer_continent.head()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
wine_continent = drink.groupby('continent')['wine_servings'].describe()
wine_continent.head()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
alcohol_continent_mean = drink.groupby('continent')['total_litres_of_pure_alcohol'].mean()
print(alcohol_continent_mean.head())
###Output
continent
AF 3.007547
AS 2.170455
EU 8.617778
OC 3.381250
SA 6.308333
Name: total_litres_of_pure_alcohol, dtype: float64
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
alcohol_continent_median = drink.groupby('continent')['total_litres_of_pure_alcohol'].median()
print(alcohol_continent_median.head())
###Output
continent
AF 2.30
AS 1.20
EU 10.00
OC 1.75
SA 6.85
Name: total_litres_of_pure_alcohol, dtype: float64
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
alcohol_continent = drink.groupby('continent')['total_litres_of_pure_alcohol']
print(alcohol_continent.mean())
print(alcohol_continent.min())
print(alcohol_continent.max())
###Output
continent
AF 3.007547
AS 2.170455
EU 8.617778
OC 3.381250
SA 6.308333
Name: total_litres_of_pure_alcohol, dtype: float64
continent
AF 0.0
AS 0.0
EU 0.0
OC 0.0
SA 3.8
Name: total_litres_of_pure_alcohol, dtype: float64
continent
AF 9.1
AS 11.5
EU 14.4
OC 10.4
SA 8.3
Name: total_litres_of_pure_alcohol, dtype: float64
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv).
###Code
### Step 3. Assign it to a variable called drinks.
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv", sep=",")
drinks.columns
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean().sort_values(ascending=False).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url, header = 0, sep = ',')
drinks.columns
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(['continent']).mean().sort_values(['beer_servings'], ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby(['continent']).describe().wine_servings
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
df = pd.DataFrame({})
df['spirit_min'] = pd.Series([drinks.spirit_servings.min()])
df['spirit_max'] = pd.Series([drinks.spirit_servings.max()])
df['spirit_mean'] = pd.Series([drinks.spirit_servings.mean()])
df
df = drinks.groupby(['continent']).min().spirit_servings.reset_index()
df['spirit_servings_max'] = drinks.groupby(['continent']).max().spirit_servings.reset_index().spirit_servings
df['spirit_servings_mean'] = drinks.groupby(['continent']).mean().spirit_servings.reset_index().spirit_servings
df.rename(columns={'spirit_servings':'spirit_servings_min'})
# !!! agg is handy
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby("continent")["beer_servings"].mean().sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby("continent")["wine_servings"].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby("continent").mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby("country").median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby("continent")["spirit_servings"].agg(["mean", "min", "max"])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarizes as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
link = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(link)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
df = drinks.groupby('continent')['beer_servings'].mean()
df.sort_values()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max'])
pd.DataFrame.agg?
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks_gb_avg = drinks.groupby(["continent"]).agg({'beer_servings': 'mean'})
drinks_gb_avg.sort_values(by="beer_servings", ascending=False)
#.agg({"beer_servings": "average"})
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
#drinks.groupby(["continent"]).agg({'wine_servings': {'count', 'mean', 'min', 'max'}})
drinks.groupby("continent")[["wine_servings"]].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(["continent"]).mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(["continent"]).median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby(["continent"]).agg({"spirit_servings":{"mean", "min", "max"}})
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url= 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
df = (pd.read_csv (url, sep = ','))
df.head (10)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
df.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
df.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
## primero agrupo por continente y le pido que saque la media de alcohol, esto lo meto en una variable nueva.
df.groupby('continent').total_litres_of_pure_alcohol.mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
df.groupby('continent').total_litres_of_pure_alcohol.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
media = df.spirit_servings.mean()
media
maximo = df.spirit_servings.max()
maximo
minimo = df.spirit_servings.min()
minimo
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
con=drinks.groupby('continent')['beer_servings'].mean()
pd.DataFrame(con)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
wine=drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['min','max','mean','median'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv).
###Code
url = r'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
data = pd.read_csv(url)
data[:3]
###Output
_____no_output_____
###Markdown
Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.DataFrame(data)
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean().sort_values(ascending=False).head(1)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')['spirit_servings'].agg(['mean','min','max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks.head(10)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['max', 'min', 'mean'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url, sep = ',', keep_default_na=False) #because our data NA was interpreted as NaN
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(drinks.continent).mean()[['beer_servings']].sort_values(['beer_servings'],ascending = False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby(drinks.continent).wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').agg(['mean','min','max'])[['spirit_servings']]
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drunk_continent = drinks.groupby(by='continent')['beer_servings'].mean()
drunk_continent.sort_values(ascending=False).index[0]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
wine_continent = drinks.groupby(by='continent')['wine_servings']
wine_continent.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent')[['beer_servings','spirit_servings','wine_servings']].mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent')[['beer_servings','spirit_servings','wine_servings']].median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks_grouped = pd.DataFrame()
drinks_grouped['spirit_servings_mean'] = drinks.groupby(by='continent')['spirit_servings'].mean()
drinks_grouped['spirit_servings_max'] = drinks.groupby(by='continent')['spirit_servings'].max()
drinks_grouped['spirit_servings_min'] = drinks.groupby(by='continent')['spirit_servings'].min()
drinks_grouped.head()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean() #EU
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').total_litres_of_pure_alcohol.median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
drinks = pd.read_csv(url)
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(['continent'])['beer_servings'].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby(['continent'])['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(['continent'])[['total_litres_of_pure_alcohol', 'beer_servings', 'wine_servings']].mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks['spirit_servings'].describe()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.sort_values(by='beer_servings', ascending=False).iloc[0:1, :]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby(by='continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(by='continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby(by='continent').agg({'spirit_servings': ['mean', 'min', 'max']})
#drinks.agg({'spirit_servings': ['mean', 'min', 'max']})
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').agg({'spirit_servings': ['mean', 'min', 'max']})
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent')['beer_servings'].mean().idxmax()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').describe()['wine_servings']
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['min', 'max', 'mean'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
drinks.head()
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean','max','min'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks=pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv',sep=',')
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').mean()[['beer_servings']]
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')[['wine_servings']].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent')[['total_litres_of_pure_alcohol']].mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent')[['total_litres_of_pure_alcohol']].median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent')[['spirit_servings']].describe()
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
url = "https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv"
drinks = pd.read_csv(url, sep=",")
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.groupby('continent')["wine_servings"].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction:GroupBy can be summarized as Split-Apply-Combine.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv")
drinks
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby('continent').beer_servings.mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks.loc[:, ['country', 'wine_servings']]
drinks.groupby('continent').wine_servings.describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
d = drinks.loc[:, ['continent', 'total_litres_of_pure_alcohol']]
d.groupby(['continent']).mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(["continent"]).median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame DataFrame.agg(func, axis=0, *args, **kwargs)Aggregate using one or more operations over the specified axis. The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`, `var`), where the default is to compute the aggregation of the flattened array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d, axis=0)``.
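A tiny, self-contained illustration of the axis behaviour described above (the small frame is invented purely for this example):

```python
import numpy as np
import pandas as pd

arr_2d = np.arange(6).reshape(3, 2)
df = pd.DataFrame(arr_2d, columns=['a', 'b'])

print(np.mean(arr_2d))                 # 2.5 -> numpy aggregates the flattened array by default
print(df.agg('mean'))                  # a: 2.0, b: 3.0 -> pandas aggregates over the index axis, per column
print(df.agg(['mean', 'min', 'max']))  # one row per aggregation, one column per original column
```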
###Code
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
###Output
_____no_output_____ |
Deep learning model/Senshagen_project.ipynb | ###Markdown
Recurrent Neural Network Part 1 - Data Preprocessing Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
###Output
_____no_output_____
###Markdown
Importing the training set
###Code
dataset_train = pd.read_csv("no2.csv")
###Output
_____no_output_____
###Markdown
Cleaning the data
###Code
col_to_remove_no2 =('wkt_geom', 'id', 'timestam_1', 'unit_NO2', 'unit_PM10', 'unit_RH', 'unit_P', 'value_PM10', 'value_RH', 'value_P')
for name in col_to_remove_no2:
if name in dataset_train.columns:
        dataset_train = dataset_train.drop(name, axis=1)
dates=[]
for i in range(len(dataset_train)):
dates.append(dataset_train['timestamp_'][i])
cleaned_dates_no2=[]
for date in dates:
value = dt.datetime.strptime(date, '%Y-%m-%d')
cleaned_dates_no2.append(value)
dates=[] #emptying list
dataset_train['dates'] = cleaned_dates_no2
##creating list of dataframes for each no2 sensor
no2_sensors = dataset_train.label.unique()
dataset_per_no2_sensor =[] #list of dataframes for each NO2 sensor
for sensor in no2_sensors:
dataset_per_no2_sensor.append(dataset_train.loc[dataset_train['label'] == sensor])
print(dataset_per_no2_sensor[0])
#CHECKING AND MITIGATING FOR NAN VALUES
window_size = 20 #select a window size to calculate average values for cells with no data
count =0
for dataset in dataset_per_no2_sensor:
for r in range(len(dataset)):
if np.isnan(dataset.iloc[r,2]):
count+=1
# print(np.isnan(dataset.iloc[r,2]))
            # look both backwards and forwards around the missing value
            window = range(-window_size, window_size + 1)
            mean_values=[]
            for w in window:
                if 0 <= r + w < len(dataset) and dataset.iloc[r+w,2] >= 0:
                    mean_values.append(dataset.iloc[r+w,2])
if len(mean_values)!=0:
mean = (sum(mean_values))/len(mean_values)
dataset.iloc[r,2]=mean
if np.isnan(dataset.iloc[r,2]) :
mean = dataset['value_NO2'].mean()
dataset.iloc[r,2] = mean
#break into test and train dataset
no2_train=[]
no2_test=[]
train_split_ratio = 0.8
for dataset in dataset_per_no2_sensor:
print(dataset.isnull().any())
train_len = int(train_split_ratio*len(dataset))
test_len = len(dataset) - train_len
no2_train.append(dataset[:train_len])
no2_test.append(dataset[train_len:-1])
###Output
_____no_output_____
###Markdown
The model
###Code
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, Embedding
from keras.regularizers import l2
from tensorflow.keras import activations
for i in range(len(no2_train)):
training_set = no2_train[i]
training_set= training_set.iloc[:,2:3].values
test_set = no2_test[i]
test_set= test_set.iloc[:,2:3].values
sc = MinMaxScaler(feature_range=(0,1))
training_set_sc = sc.fit_transform(training_set)
print(training_set_sc)
print(len(training_set_sc))
x_train=[]
y_train=[]
for x in range(144, len(training_set)):
        x_train.append(training_set_sc[x-144:x,0])  # sliding window: each sample is the previous 144 scaled values
y_train.append(training_set_sc[x,0])
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1],1))# shape - batch_size, timestep, number of indicators
RNNregressor = Sequential()
RNNregressor.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1],1)))
RNNregressor.add(Dropout(0.2))##20% dropout
RNNregressor.add(LSTM(units=50, return_sequences=True))
RNNregressor.add(Dropout(0.2))##20% dropout
RNNregressor.add(LSTM(units=50, return_sequences=True))
RNNregressor.add(Dropout(0.2))##20% dropout
RNNregressor.add(LSTM(units=50))
RNNregressor.add(Dropout(0.2))##20% dropout
RNNregressor.add(Dense(units=1))
    RNNregressor.compile(optimizer = 'Adam', loss = 'mean_squared_error')#experiment with different optimizers
RNNregressor.fit(x=x_train, y = y_train, epochs = 50, batch_size = 32 )
dataset_concat=dataset_per_no2_sensor[i]
dataset_concat = dataset_concat.iloc[:,2:3]
inputs = dataset_concat[len(dataset_concat)-len(test_set)-144 : ].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
x_test=[]
    for y in range(144, 144+120):  # predict the next 120 records, each using a lookback window of 144 timesteps
        x_test.append(inputs[y-144:y,0])  # one row of 144 scaled NO2 values per test sample
x_test = np.array(x_test)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1],1))# shape - batch_size, timestep, number of indicators
    predicted_no2 = RNNregressor.predict(x_test)
    predicted_no2 = sc.inverse_transform(predicted_no2)
    plt.plot(test_set[0:120], color ='red', label = "Real NO2")
    plt.plot(predicted_no2, color ='blue', label = "Predicted NO2")
    plt.title('NO2 prediction')
plt.xlabel('records')
plt.ylabel('No2 levels')
plt.legend()
plt.show()
# Use a column range when slicing so the resulting np array keeps a 2D shape (all rows, one column)
# The NO2 value column is the single feature used to learn the trend
###Output
[[0.61415525]
[0.42170293]
[0.42170293]
...
[0.40357239]
[0.39833468]
[0.35616438]]
2894
Epoch 1/50
86/86 [==============================] - 17s 193ms/step - loss: 0.0145
Epoch 2/50
86/86 [==============================] - 17s 197ms/step - loss: 0.0113
Epoch 3/50
86/86 [==============================] - 18s 207ms/step - loss: 0.0111
Epoch 4/50
86/86 [==============================] - 17s 203ms/step - loss: 0.0115
Epoch 5/50
86/86 [==============================] - 18s 206ms/step - loss: 0.0106
Epoch 6/50
86/86 [==============================] - 18s 206ms/step - loss: 0.0102
Epoch 7/50
86/86 [==============================] - 18s 206ms/step - loss: 0.0101
Epoch 8/50
86/86 [==============================] - 18s 206ms/step - loss: 0.0096
Epoch 9/50
86/86 [==============================] - 18s 208ms/step - loss: 0.0095
Epoch 10/50
86/86 [==============================] - 18s 209ms/step - loss: 0.0092
Epoch 11/50
86/86 [==============================] - 18s 207ms/step - loss: 0.0092
Epoch 12/50
86/86 [==============================] - 18s 208ms/step - loss: 0.0088
Epoch 13/50
86/86 [==============================] - 18s 209ms/step - loss: 0.0087
Epoch 14/50
86/86 [==============================] - 18s 205ms/step - loss: 0.0084
Epoch 15/50
86/86 [==============================] - 18s 206ms/step - loss: 0.0074
Epoch 16/50
86/86 [==============================] - 17s 199ms/step - loss: 0.0069
Epoch 17/50
86/86 [==============================] - 17s 201ms/step - loss: 0.0064
Epoch 18/50
86/86 [==============================] - 17s 199ms/step - loss: 0.0061
Epoch 19/50
86/86 [==============================] - 17s 199ms/step - loss: 0.0058
Epoch 20/50
86/86 [==============================] - 18s 205ms/step - loss: 0.0054
Epoch 21/50
86/86 [==============================] - 17s 199ms/step - loss: 0.0055
Epoch 22/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0053
Epoch 23/50
86/86 [==============================] - 17s 199ms/step - loss: 0.0051
Epoch 24/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0051
Epoch 25/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0050
Epoch 26/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0049
Epoch 27/50
86/86 [==============================] - 17s 201ms/step - loss: 0.0048
Epoch 28/50
86/86 [==============================] - 17s 198ms/step - loss: 0.0050
Epoch 29/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0048
Epoch 30/50
86/86 [==============================] - 17s 199ms/step - loss: 0.0047
Epoch 31/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0048
Epoch 32/50
86/86 [==============================] - 18s 212ms/step - loss: 0.0046
Epoch 33/50
86/86 [==============================] - 17s 202ms/step - loss: 0.0046
Epoch 34/50
86/86 [==============================] - 17s 202ms/step - loss: 0.0046
Epoch 35/50
86/86 [==============================] - 17s 202ms/step - loss: 0.0047
Epoch 36/50
86/86 [==============================] - 17s 201ms/step - loss: 0.0045
Epoch 37/50
86/86 [==============================] - 17s 202ms/step - loss: 0.0047
Epoch 38/50
86/86 [==============================] - 17s 202ms/step - loss: 0.0045
Epoch 39/50
86/86 [==============================] - 17s 203ms/step - loss: 0.0046
Epoch 40/50
86/86 [==============================] - 18s 205ms/step - loss: 0.0045
Epoch 41/50
86/86 [==============================] - 18s 205ms/step - loss: 0.0045
Epoch 42/50
86/86 [==============================] - 18s 208ms/step - loss: 0.0044
Epoch 43/50
86/86 [==============================] - 18s 211ms/step - loss: 0.0044
Epoch 44/50
86/86 [==============================] - 19s 215ms/step - loss: 0.0045
Epoch 45/50
86/86 [==============================] - 17s 201ms/step - loss: 0.0043
Epoch 46/50
86/86 [==============================] - 17s 202ms/step - loss: 0.0044
Epoch 47/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0044
Epoch 48/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0043
Epoch 49/50
86/86 [==============================] - 17s 200ms/step - loss: 0.0043
Epoch 50/50
86/86 [==============================] - 17s 201ms/step - loss: 0.0044
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7fe877c85f28> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
|
scripts/proto_specter.ipynb | ###Markdown
To do:1. Create DocEmbSim module to get similarity based on a pre-trained SPECTER model file2. Think about a better way to represent negative examples3. Design a feedback mechanism -- the result of this feedback is translated to some 'connection weight' to an application-pair. Note: current connection weight scale: 1. Non-related ('negative connection') 4. Destination application is rejected because target 5. Target application is rejected because destination
###Code
from allennlp.models.archival import load_archive
from allennlp.common.util import import_submodules
import_submodules('specter')
archive_file = "../model.tar.gz"
cuda_device = -1
metadata = '../data/my_training/test.txt'
included_text_fields = 'abstract title'
vocab_dir = './data/vocab/'
overrides = f"{{'model':{{'predict_mode':'true','include_venue':'false'}},\
'dataset_reader' : {{ 'type':'specter_data_reader','predict_mode':'true',\
'paper_features_path':'{metadata}',\
'included_text_fields': '{included_text_fields}'}},\
'vocabulary' : {{'directory_path':'{vocab_dir}'}}\
}}"
archive = load_archive(archive_file, cuda_device = cuda_device)
from allennlp.predictors import Predictor

Predictor.from_archive(archive)
predictor = Predictor.from_path("../model.tar.gz")
results = predictor.predict(sentence="Did Uriah honestly think he could beat The Legend of Zelda in under three hours?")
for word, tag in zip(results["words"], results["tags"]):
print(f"{word}\t{tag}")
class trained_model(object):
    """
    Load trained SPECTER model for embedding.

    Args:
    -----
    model_file    path to the file containing the trained model
    """
    def __init__(self, model_file='../model.tar.gz'):
        self.model_file = model_file
import subprocess
import argparse
import logging
logging.basicConfig(level=logging.INFO)
def embed(ids, model, metadata, output_file, cuda_device=0, batch_size=1, vocab_dir='data/vocab',
included_text_fields = 'abstract title', weights_file=None
):
overrides = f"{{'model':{{'predict_mode':'true','include_venue':'false'}},'dataset_reader':{{'type':'specter_data_reader','predict_mode':'true','paper_features_path':'{metadata}','included_text_fields': '{included_text_fields}'}},'vocabulary':{{'directory_path':'{vocab_dir}'}}}}"
command = [
'python3',
'specter/predict_command.py',
'predict',
model,
ids,
'--include-package',
'specter',
'--predictor',
'specter_predictor',
'--overrides',
f'"{overrides}"',
'--cuda-device',
str(cuda_device),
'--output-file',
output_file,
'--batch-size',
str(batch_size),
'--silent'
]
if weights_file is not None:
        command.extend(['--weights-file', weights_file])
logging.info('running command:')
logging.info(' '.join(command))
subprocess.run(' '.join(command), shell=True)
import sys
sys.path.append("../")
embed(ids = './data/my_training/test.txt', metadata='./data/my_training/metadata.json',
model='./model.tar.gz', output_file = './output.jsonl', vocab_dir='./data/vocab', batch_size=16,
cuda_device=-1)
!ls '../'
!pwd
###Output
/Users/kipnisal/specter/scripts
|
module2-vae/latent_variable_models.ipynb | ###Markdown
Homework 2. Latent Variable Models- Part 1 (5 points): VAEs on 2D Data- Part 2 (20 points): VAEs on images - VAE - VAE with AF Prior- \*Bonus
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 1: VAEs on 2D Data (5 points)Here we will train a simple VAE on 2D data, and look at situations in which latents are being used or not being used (i.e. when posterior collapse occurs) DataWe will use 4 datasets, each sampled from some gaussian.
###Code
def sample_data_1_a(count):
rand = np.random.RandomState(0)
return [[1.0, 2.0]] + (rand.randn(count, 2) * [[5.0, 1.0]]).dot(
[[np.sqrt(2) / 2, np.sqrt(2) / 2], [-np.sqrt(2) / 2, np.sqrt(2) / 2]])
def sample_data_2_a(count):
rand = np.random.RandomState(0)
return [[-1.0, 2.0]] + (rand.randn(count, 2) * [[1.0, 5.0]]).dot(
[[np.sqrt(2) / 2, np.sqrt(2) / 2], [-np.sqrt(2) / 2, np.sqrt(2) / 2]])
def sample_data_1_b(count):
rand = np.random.RandomState(0)
return [[1.0, 2.0]] + rand.randn(count, 2) * [[5.0, 1.0]]
def sample_data_2_b(count):
rand = np.random.RandomState(0)
return [[-1.0, 2.0]] + rand.randn(count, 2) * [[1.0, 5.0]]
def q1_sample_data(part, dset_id):
assert dset_id in [1, 2]
assert part in ['a', 'b']
if part == 'a':
if dset_id == 1:
dset_fn = sample_data_1_a
else:
dset_fn = sample_data_2_a
else:
if dset_id == 1:
dset_fn = sample_data_1_b
else:
dset_fn = sample_data_2_b
train_data, test_data = dset_fn(10000), dset_fn(2500)
return train_data.astype('float32'), test_data.astype('float32')
def visualize_q1_data(part, dset_id):
train_data, test_data = q1_sample_data(part, dset_id)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set_title('Train Data')
ax1.scatter(train_data[:, 0], train_data[:, 1])
ax2.set_title('Test Data')
ax2.scatter(test_data[:, 0], test_data[:, 1])
print(f'Dataset {dset_id}{part}')
plt.show()
visualize_q1_data('a', 1)
visualize_q1_data('a', 2)
visualize_q1_data('b', 1)
visualize_q1_data('b', 2)
###Output
_____no_output_____
###Markdown
Construct and train a VAE with the following characteristics* 2D latent variables $z$ with a standard normal prior, $p(z) = N(0, I)$* An approximate posterior $q_\theta(z|x) = N(z; \mu_\theta(x), \Sigma_\theta(x))$, where $\mu_\theta(x)$ is the mean vector, and $\Sigma_\theta(x)$ is a diagonal covariance matrix* A decoder $p(x|z) = N(x; \mu_\phi(z), \Sigma_\phi(z))$, where $\mu_\phi(z)$ is the mean vector, and $\Sigma_\phi(z)$ is a diagonal covariance matrix**You will provide the following deliverables**1. Over the course of training, record the average full negative ELBO, reconstruction loss $E_xE_{z\sim q(z|x)}[-\log{p(x|z)}]$, and KL term $E_x[D_{KL}(q(z|x)||p(z))]$ of the training data (per minibatch) and test data (for your entire test set). Code is provided that automatically plots the training curves. 2. Report the final test set performance of your final model3. Samples of your trained VAE with ($z\sim p(z), x\sim N(x;\mu_\phi(z),\Sigma_\phi(z))$) and without ($z\sim p(z), x = \mu_\phi(z)$) decoder noise SolutionFill out the functions below, create additional classes/functions/cells if needed
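Before filling in the skeleton below, it can help to write out the two ELBO ingredients for diagonal Gaussians explicitly. The sketch below is only an illustration (the helper names are mine, not part of the template): the reparameterization trick keeps the sampling step differentiable, and both the KL against the standard normal prior and the Gaussian reconstruction term have closed forms.

```python
import math
import torch

def reparameterize(mu, log_std):
    # z = mu + sigma * eps with eps ~ N(0, I); gradients flow through mu and log_std
    eps = torch.randn_like(mu)
    return mu + log_std.exp() * eps

def kl_to_std_normal(mu, log_std):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over the latent dimension
    return (0.5 * (mu ** 2 + (2 * log_std).exp()) - log_std - 0.5).sum(dim=1)

def gaussian_nll(x, mu, log_std):
    # -log N(x; mu, diag(sigma^2)), summed over the feature dimension
    return (log_std + 0.5 * math.log(2 * math.pi)
            + 0.5 * ((x - mu) / log_std.exp()) ** 2).sum(dim=1)

# quick sanity check: if q(z|x) equals the prior, the KL is exactly zero
mu, log_std = torch.zeros(4, 2), torch.zeros(4, 2)
print(kl_to_std_normal(mu, log_std))  # tensor([0., 0., 0., 0.])
```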
###Code
from collections import OrderedDict, defaultdict
from tqdm import tqdm
import numpy as np
import torch
import torch.nn as nn
import torch.utils.data as data
import torch.optim as optim
class VAEModel(nn.Module):
def loss(self, x):
"""
returns dict with losses (loss_name -> loss_value)
"""
pass
def sample(self, n, noise=True):
"""
returns numpy array of n sampled points, shape=(n, 2)
"""
pass
class FullyConnectedVAE(VAEModel):
# YOUR CODE HERE (define encoder & decoder in __init__)
def loss(self, x):
mu_z, log_std_z = # YOUR CODE
z = # YOUR CODE
mu_x, log_std_x = # YOUR CODE
# Compute reconstruction loss
recon_loss = # YOUR CODE
# Compute KL
kl_loss = # YOUR CODE
loss = recon_loss + kl_loss
return {"loss": loss, "reconstruction_loss": recon_loss, "kl_loss": kl_loss}
def sample(self, n, noise=True):
# YOUR CODE
def train_epoch(model, train_loader, optimizer, epoch, grad_clip=None):
"""
train model on loader for single epoch
returns Dict[str, List[float]] - dict of losses on each training batch
"""
model.train()
losses = defaultdict(list)
# YOUR CODE
return losses
def valid_epoch(model, data_loader):
"""
evaluates model on dataset
returns Dict[str, float] - dict with average losses on entire dataset
"""
model.eval()
# YOUR CODE
return average_losses
def train_loop(model, train_loader, test_loader, epochs=10, lr=1e-3, grad_clip=None):
optimizer = optim.Adam(model.parameters(), lr=lr)
train_losses, test_losses = defaultdict(list), defaultdict(list)
for epoch in range(epochs):
train_loss = train_epoch(model, train_loader, optimizer, epoch, grad_clip)
test_loss = valid_epoch(model, test_loader)
for k in train_loss.keys():
train_losses[k].extend(train_loss[k])
test_losses[k].append(test_loss[k])
return train_losses, test_losses
def q1(train_data, test_data, part, dset_id):
"""
train_data: An (n_train, 2) numpy array of floats
test_data: An (n_test, 2) numpy array of floats
(You probably won't need to use the two inputs below, but they are there
if you want to use them)
part: An identifying string ('a' or 'b') of which part is being run. Most likely
used to set different hyperparameters for different datasets
dset_id: An identifying number of which dataset is given (1 or 2). Most likely
used to set different hyperparameters for different datasets
Returns
- a (# of training iterations, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated every minibatch
    - a (# of epochs + 1, 3) numpy array of full negative ELBO, reconstruction loss E[-p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated once at initialization and after each epoch
- a numpy array of size (1000, 2) of 1000 samples WITH decoder noise, i.e. sample z ~ p(z), x ~ p(x|z)
- a numpy array of size (1000, 2) of 1000 samples WITHOUT decoder noise, i.e. sample z ~ p(z), x = mu(z)
"""
""" YOUR CODE HERE """
model = # YOUR CODE HERE
train_loader = data.DataLoader(train_data, batch_size=128, shuffle=True)
test_loader = data.DataLoader(test_data, batch_size=128)
train_losses, test_losses = train_loop(model, train_loader, test_loader,
epochs=10, lr=1e-3)
train_losses = np.stack((train_losses['loss'], train_losses['reconstruction_loss'], train_losses['kl_loss']), axis=1)
test_losses = np.stack((test_losses['loss'], test_losses['reconstruction_loss'], test_losses['kl_loss']), axis=1)
samples_noise = model.sample(1000, noise=True)
samples_nonoise = model.sample(1000, noise=False)
return train_losses, test_losses, samples_noise, samples_nonoise
###Output
_____no_output_____
###Markdown
Results
###Code
def draw_2d(samples, title):
plt.scatter(samples[:, 0], samples[:, 1])
plt.title(title)
plt.show()
def plot_vae_training_plot(train_losses, test_losses, title):
elbo_train, recon_train, kl_train = train_losses[:, 0], train_losses[:, 1], train_losses[:, 2]
elbo_test, recon_test, kl_test = test_losses[:, 0], test_losses[:, 1], test_losses[:, 2]
plt.figure()
n_epochs = len(test_losses) - 1
x_train = np.linspace(0, n_epochs, len(train_losses))
x_test = np.arange(n_epochs + 1)
plt.plot(x_train, elbo_train, label='-elbo_train')
plt.plot(x_train, recon_train, label='recon_loss_train')
plt.plot(x_train, kl_train, label='kl_loss_train')
plt.plot(x_test, elbo_test, label='-elbo_test')
plt.plot(x_test, recon_test, label='recon_loss_test')
plt.plot(x_test, kl_test, label='kl_loss_test')
plt.legend()
plt.title(title)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
def q1_results(part, dset_id, fn):
print(f"Dataset {dset_id}{part}")
train_data, test_data = q1_sample_data(part, dset_id)
train_losses, test_losses, samples_noise, samples_nonoise = fn(train_data, test_data, part, dset_id)
print(f'Final -ELBO: {test_losses[-1, 0]:.4f}, Recon Loss: {test_losses[-1, 1]:.4f}, '
f'KL Loss: {test_losses[-1, 2]:.4f}')
plot_vae_training_plot(train_losses, test_losses, title=f'Dataset {dset_id}{part} Train Plot')
draw_2d(train_data, 'original data')
draw_2d(samples_noise, 'sampled with noise')
draw_2d(samples_nonoise, 'sampled without noise')
q1_results('a', 1, q1)
q1_results('a', 2, q1)
q1_results('b', 1, q1)
q1_results('b', 2, q1)
###Output
_____no_output_____
###Markdown
ReflectionCompare the sampled xs with and without decoder noise for datasets (a) and (b). For which datasets are the latents being used? Why is this happening (i.e. why are the latents being ignored in some cases)? **Write your answer (1-2 sentences):** Part 2: VAEs on ImagesAfter the previous exercise you should understand how to train simple VAE. Now let's move from 2D space to more complex image spaces. The training methodology is just the same, the only difference is if we want to have good results we should have better encoder and decoder models.In this section, you will train different VAE models on image datasets. Execute the cell below to visualize the two datasets (CIFAR10, and [SVHN](http://ufldl.stanford.edu/housenumbers/)).
###Code
from torchvision.datasets import SVHN, CIFAR10
from torchvision.utils import make_grid
def show_samples(samples, nrow=10, title='Samples'):
samples = (torch.FloatTensor(samples) / 255).permute(0, 3, 1, 2)
grid_img = make_grid(samples, nrow=nrow)
plt.figure()
plt.title(title)
plt.imshow(grid_img.permute(1, 2, 0))
plt.axis('off')
plt.show()
DATA_DIR = './data'
def get_cifar10():
train = CIFAR10(root=f'{DATA_DIR}/cifar10', train=True, download=True).data
test = CIFAR10(root=f'{DATA_DIR}/cifar10', train=False).data
return train, test
def get_svhn():
train = SVHN(root=f'{DATA_DIR}/svhn', split='train', download=True).data.transpose(0, 2, 3, 1)
test = SVHN(root=f'{DATA_DIR}/svhn', split='test', download=True).data.transpose(0, 2, 3, 1)
return train, test
def visualize_cifar10():
_, test = get_cifar10()
samples = test[np.random.choice(len(test), 100)]
show_samples(samples, title="CIFAR10 samples")
def visualize_svhn():
_, test = get_svhn()
print(test.shape)
samples = test[np.random.choice(len(test), 100)]
show_samples(samples, title="SVHN samples")
visualize_cifar10()
visualize_svhn()
###Output
_____no_output_____
###Markdown
Part (a) VAE (10 points)In this part, implement a standard VAE with the following characteristics:* 16-dim latent variables $z$ with standard normal prior $p(z) = N(0,I)$* An approximate posterior $q_\theta(z|x) = N(z; \mu_\theta(x), \Sigma_\theta(x))$, where $\mu_\theta(x)$ is the mean vector, and $\Sigma_\theta(x)$ is a diagonal covariance matrix* A decoder $p(x|z) = N(x; \mu_\phi(z), I)$, where $\mu_\phi(z)$ is the mean vector. (We are not learning the covariance of the decoder)You can play around with different architectures and try for better results, but the following encoder / decoder architecture below suffices (Note that image input is always $32\times 32$.```conv2d(in_channels, out_channels, kernel_size, stride, padding)transpose_conv2d(in_channels, out_channels, kernel_size, stride, padding)linear(in_dim, out_dim)Encoder conv2d(3, 32, 3, 1, 1) relu() conv2d(32, 64, 3, 2, 1) 16 x 16 relu() conv2d(64, 128, 3, 2, 1) 8 x 8 relu() conv2d(128, 256, 3, 2, 1) 4 x 4 relu() flatten() linear(4 * 4 * 256, 2 * latent_dim)Decoder linear(latent_dim, 4 * 4 * 128) relu() reshape(4, 4, 128) transpose_conv2d(128, 128, 4, 2, 1) 8 x 8 relu() transpose_conv2d(128, 64, 4, 2, 1) 16 x 16 relu() transpose_conv2d(64, 32, 4, 2, 1) 32 x 32 relu() conv2d(32, 3, 3, 1, 1)```You may find the following training tips helpful* When computing reconstruction loss and KL loss, average over the batch dimension and **sum** over the feature dimension* When computing reconstruction loss, it suffices to just compute MSE between the reconstructed $x$ and true $x$ (you can compute the extra constants if you want)* Use batch size 128, learning rate $10^{-3}$, and an Adam optimizer**You will provide the following deliverables**1. Over the course of training, record the average full negative ELBO, reconstruction loss, and KL term of the training data (per minibatch) and test data (for your entire test set). Code is provided that automatically plots the training curves. 2. Report the final test set performance of your final model3. 100 samples from your trained VAE4. 50 real-image / reconstruction pairs (for some $x$, encode and then decode)5. Interpolations of length 10 between 10 pairs of test images from your VAE (100 images total) SolutionFill out the function below and return the neccessary arguments. Feel free to create more cells if need be
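As a quick shape sanity-check, here is one possible PyTorch transcription of the reference encoder/decoder listed above. It is only a sketch: the variable names and the random input at the end are mine, and you still need to wrap this in your own VAE class with reparameterized sampling, the losses, and the training loop.

```python
import torch
import torch.nn as nn

latent_dim = 16

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),    # 16 x 16
    nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),   # 8 x 8
    nn.Conv2d(128, 256, 3, 2, 1), nn.ReLU(),  # 4 x 4
    nn.Flatten(),
    nn.Linear(4 * 4 * 256, 2 * latent_dim),   # mu and log-std of q(z|x)
)

decoder = nn.Sequential(
    nn.Linear(latent_dim, 4 * 4 * 128), nn.ReLU(),
    nn.Unflatten(1, (128, 4, 4)),
    nn.ConvTranspose2d(128, 128, 4, 2, 1), nn.ReLU(),  # 8 x 8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 16 x 16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 32 x 32
    nn.Conv2d(32, 3, 3, 1, 1),                         # mean of p(x|z)
)

x = torch.randn(2, 3, 32, 32)                 # stand-in batch of images, already scaled
mu_z, log_std_z = encoder(x).chunk(2, dim=1)
z = mu_z + log_std_z.exp() * torch.randn_like(mu_z)
print(decoder(z).shape)                       # torch.Size([2, 3, 32, 32])
```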
###Code
def q2_a(train_data, test_data, dset_id):
"""
train_data: An (n_train, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
test_data: An (n_test, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
dset_id: An identifying number of which dataset is given (1 or 2). Most likely
used to set different hyperparameters for different datasets
Returns
- a (# of training iterations, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated every minibatch
- a (# of epochs + 1, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated once at initialization and after each epoch
- a (100, 32, 32, 3) numpy array of 100 samples from your VAE with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 50 real image / reconstruction pairs
FROM THE TEST SET with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 10 interpolations of length 10 between
pairs of test images. The output should be those 100 images flattened into
the specified shape with values in {0, ..., 255}
"""
""" YOUR CODE HERE """
###Output
_____no_output_____
###Markdown
ResultsOnce you've finished `q2_a`, execute the cells below to visualize and save your results.
###Code
import torch.nn.functional as F
def q2_results(dset_id, fn):
if dset_id.lower() == 'cifar':
train_data, test_data = get_cifar10()
elif dset_id.lower() == 'svhn':
train_data, test_data = get_svhn()
else:
raise ValueError("Unsupported dataset")
train_losses, test_losses, samples, reconstructions, interpolations = fn(train_data, test_data, dset_id)
samples, reconstructions, interpolations = samples.astype('float32'), reconstructions.astype('float32'), interpolations.astype('float32')
print(f'Final -ELBO: {test_losses[-1, 0]:.4f}, Recon Loss: {test_losses[-1, 1]:.4f}, '
f'KL Loss: {test_losses[-1, 2]:.4f}')
plot_vae_training_plot(train_losses, test_losses, f'Dataset {dset_id} Train Plot')
show_samples(samples, title=f'{dset_id} samples')
show_samples(reconstructions, title=f'{dset_id} Reconstructions')
show_samples(interpolations, title=f'{dset_id} Interpolations')
q2_results('cifar', q2_a)
q2_results('svhn', q2_a)
###Output
_____no_output_____
###Markdown
Part (b) VAE with AF Prior (10 points)In this part, implement a VAE with an Autoregressive Flow prior ([VLAE](https://arxiv.org/abs/1611.02731)) with the following characteristics:* 16-dim latent variables $z$ with a MADE prior, with $\epsilon \sim N(0, I)$* An approximate posterior $q_\theta(z|x) = N(z; \mu_\theta(x), \Sigma_\theta(x))$, where $\mu_\theta(x)$ is the mean vector, and $\Sigma_\theta(x)$ is a diagonal covariance matrix* A decoder $p(x|z) = N(x; \mu_\phi(z), I)$, where $\mu_\phi(z)$ is the mean vector. (We are not learning the covariance of the decoder)You can use the same encoder / decoder architectures and training hyperparameters as part (a). For your MADE prior, it would suffice to use two hidden layers of size $512$. More explicitly, your MADE AF (mapping from $z\rightarrow \epsilon$) should output location $\mu_\psi(z)$ and scale parameters $\sigma_\psi(z)$ and do the following transformation on $z$:$$\epsilon = z \odot \sigma_\psi(z) + \mu_\psi(z)$$where the $i$th element of $\sigma_\psi(z)$ is computed from $z_{<i}$ (same for $\mu_\psi(z)$) and optimize the objective$$-E_{z\sim q(z|x)}[\log{p(x|z)}] + E_{z\sim q(z|x)}[\log{q(z|x)} - \log{p(z)}]$$where $$\log{p(z)} = \log{p(\epsilon)} + \log{\det\left|\frac{d\epsilon}{dz}\right|}$$**You will provide the following deliverables**1. Over the course of training, record the average full negative ELBO, reconstruction loss, and KL term of the training data (per minibatch) and test data (for your entire test set). Code is provided that automatically plots the training curves. 2. Report the final test set performance of your final model3. 100 samples from your trained VAE4. 50 real-image / reconstruction pairs (for some $x$, encode and then decode)5. Interpolations of length 10 between 10 pairs of test images from your VAE (100 images total) SolutionFill out the function below and return the neccessary arguments. Feel free to create more cells if need be
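Because the map from $z$ to $\epsilon$ is autoregressive, its Jacobian is triangular and the log-determinant reduces to a sum of the per-dimension log scales. A minimal sketch of the prior term, assuming a `made(z)` network that returns per-dimension location and log-scale (that interface is an assumption, not part of the starter code):
```
import torch
import torch.distributions as D

def af_prior_log_prob(z, made):
    # made(z) is assumed to return (mu, log_sigma), each shaped like z,
    # with the i-th outputs depending only on z_<i.
    mu, log_sigma = made(z)
    eps = z * log_sigma.exp() + mu                       # eps = z * sigma(z) + mu(z)
    log_p_eps = D.Normal(0.0, 1.0).log_prob(eps).sum(dim=1)
    log_det = log_sigma.sum(dim=1)                       # triangular Jacobian: sum of log sigma_i
    return log_p_eps + log_det                           # log p(z)
```
Having the network predict log sigma keeps the scale strictly positive and makes the log-determinant a plain sum.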
###Code
def q2_b(train_data, test_data, dset_id):
"""
train_data: An (n_train, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
test_data: An (n_test, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
dset_id: An identifying number of which dataset is given (1 or 2). Most likely
used to set different hyperparameters for different datasets
Returns
- a (# of training iterations, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated every minibatch
- a (# of epochs + 1, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated once at initialization and after each epoch
- a (100, 32, 32, 3) numpy array of 100 samples from your VAE with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 50 real image / reconstruction pairs
FROM THE TEST SET with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 10 interpolations of length 10 between
pairs of test images. The output should be those 100 images flattened into
the specified shape with values in {0, ..., 255}
"""
""" YOUR CODE HERE """
###Output
_____no_output_____
###Markdown
ResultsOnce you've finished `q2_b`, execute the cells below to visualize and save your results.
###Code
q2_results('cifar', q2_b)
q2_results('svhn', q2_b)
###Output
_____no_output_____
###Markdown
Homework 2. Latent Variable Models- Part 1 (5 points): VAEs on 2D Data- Part 2 (20 points): VAEs on images - VAE - VAE with AF Prior- \*Bonus
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 1: VAEs on 2D Data (5 points)Here we will train a simple VAE on 2D data, and look at situations in which latents are being used or not being used (i.e. when posterior collapse occurs) DataWe will use 4 datasets, each sampled from some gaussian.
###Code
def sample_data_1_a(count):
rand = np.random.RandomState(0)
return [[1.0, 2.0]] + (rand.randn(count, 2) * [[5.0, 1.0]]).dot(
[[np.sqrt(2) / 2, np.sqrt(2) / 2], [-np.sqrt(2) / 2, np.sqrt(2) / 2]])
def sample_data_2_a(count):
rand = np.random.RandomState(0)
return [[-1.0, 2.0]] + (rand.randn(count, 2) * [[1.0, 5.0]]).dot(
[[np.sqrt(2) / 2, np.sqrt(2) / 2], [-np.sqrt(2) / 2, np.sqrt(2) / 2]])
def sample_data_1_b(count):
rand = np.random.RandomState(0)
return [[1.0, 2.0]] + rand.randn(count, 2) * [[5.0, 1.0]]
def sample_data_2_b(count):
rand = np.random.RandomState(0)
return [[-1.0, 2.0]] + rand.randn(count, 2) * [[1.0, 5.0]]
def q1_sample_data(part, dset_id):
assert dset_id in [1, 2]
assert part in ['a', 'b']
if part == 'a':
if dset_id == 1:
dset_fn = sample_data_1_a
else:
dset_fn = sample_data_2_a
else:
if dset_id == 1:
dset_fn = sample_data_1_b
else:
dset_fn = sample_data_2_b
train_data, test_data = dset_fn(10000), dset_fn(2500)
return train_data.astype('float32'), test_data.astype('float32')
def visualize_q1_data(part, dset_id):
train_data, test_data = q1_sample_data(part, dset_id)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set_title('Train Data')
ax1.scatter(train_data[:, 0], train_data[:, 1])
ax2.set_title('Test Data')
ax2.scatter(test_data[:, 0], test_data[:, 1])
print(f'Dataset {dset_id}{part}')
plt.show()
visualize_q1_data('a', 1)
visualize_q1_data('a', 2)
visualize_q1_data('b', 1)
visualize_q1_data('b', 2)
###Output
_____no_output_____
###Markdown
Construct and train a VAE with the following characteristics* 2D latent variables $z$ with a standard normal prior, $p(z) = N(0, I)$* An approximate posterior $q_\theta(z|x) = N(z; \mu_\theta(x), \Sigma_\theta(x))$, where $\mu_\theta(x)$ is the mean vector, and $\Sigma_\theta(x)$ is a diagonal covariance matrix* A decoder $p(x|z) = N(x; \mu_\phi(z), \Sigma_\phi(z))$, where $\mu_\phi(z)$ is the mean vector, and $\Sigma_\phi(z)$ is a diagonal covariance matrix**You will provide the following deliverables**1. Over the course of training, record the average full negative ELBO, reconstruction loss $E_xE_{z\sim q(z|x)}[-\log{p(x|z)}]$, and KL term $E_x[D_{KL}(q(z|x)||p(z))]$ of the training data (per minibatch) and test data (for your entire test set). Code is provided that automatically plots the training curves. 2. Report the final test set performance of your final model3. Samples of your trained VAE with ($z\sim p(z), x\sim N(x;\mu_\phi(z),\Sigma_\phi(z))$) and without ($z\sim p(z), x = \mu_\phi(z)$) decoder noise SolutionFill out the functions below, create additional classes/functions/cells if needed
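The stub below needs three closed-form pieces: a reparameterized posterior sample, the Gaussian reconstruction term, and the KL between a diagonal Gaussian and the standard normal prior. A minimal sketch, assuming the encoder and decoder both return `(mu, log_std)` pairs (that interface is my assumption):
```
import math
import torch

def reparameterize(mu, log_std):
    # z = mu + sigma * eps with eps ~ N(0, I); keeps the sample differentiable.
    return mu + log_std.exp() * torch.randn_like(mu)

def gaussian_nll(x, mu, log_std):
    # -log N(x; mu, diag(sigma^2)), summed over feature dims, averaged over the batch.
    nll = 0.5 * (math.log(2 * math.pi) + 2 * log_std + (x - mu) ** 2 / (2 * log_std).exp())
    return nll.sum(dim=1).mean()

def kl_to_standard_normal(mu, log_std):
    # KL(N(mu, diag(sigma^2)) || N(0, I)), summed over latent dims, averaged over the batch.
    kl = 0.5 * ((2 * log_std).exp() + mu ** 2 - 1.0 - 2 * log_std)
    return kl.sum(dim=1).mean()
```
With these, the reconstruction loss would be `gaussian_nll(x, mu_x, log_std_x)` and the KL term `kl_to_standard_normal(mu_z, log_std_z)`, matching the per-feature summing and per-batch averaging used in the training plots.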
###Code
from collections import OrderedDict, defaultdict
from tqdm import tqdm
import numpy as np
import torch
import torch.nn as nn
import torch.utils.data as data
import torch.optim as optim
class VAEModel(nn.Module):
def loss(self, x):
"""
returns dict with losses (loss_name -> loss_value)
"""
pass
def sample(self, n, noise=True):
"""
returns numpy array of n sampled points, shape=(n, 2)
"""
pass
class FullyConnectedVAE(VAEModel):
# YOUR CODE HERE (define encoder & decoder in __init__)
def loss(self, x):
mu_z, log_std_z = # YOUR CODE
z = # YOUR CODE
mu_x, log_std_x = # YOUR CODE
# Compute reconstruction loss
recon_loss = # YOUR CODE
# Compute KL
kl_loss = # YOUR CODE
loss = recon_loss + kl_loss
return {"loss": loss, "reconstruction_loss": recon_loss, "kl_loss": kl_loss}
def sample(self, n, noise=True):
# YOUR CODE
def train_epoch(model, train_loader, optimizer, epoch, grad_clip=None):
"""
train model on loader for single epoch
returns Dict[str, List[float]] - dict of losses on each training batch
"""
model.train()
losses = defaultdict(list)
# YOUR CODE
return losses
def valid_epoch(model, data_loader):
"""
evaluates model on dataset
returns Dict[str, float] - dict with average losses on entire dataset
"""
model.eval()
# YOUR CODE
return average_losses
def train_loop(model, train_loader, test_loader, epochs=10, lr=1e-3, grad_clip=None):
optimizer = optim.Adam(model.parameters(), lr=lr)
train_losses, test_losses = defaultdict(list), defaultdict(list)
for epoch in range(epochs):
train_loss = train_epoch(model, train_loader, optimizer, epoch, grad_clip)
test_loss = valid_epoch(model, test_loader)
for k in train_loss.keys():
train_losses[k].extend(train_loss[k])
test_losses[k].append(test_loss[k])
return train_losses, test_losses
def q1(train_data, test_data, part, dset_id):
"""
train_data: An (n_train, 2) numpy array of floats
test_data: An (n_test, 2) numpy array of floats
(You probably won't need to use the two inputs below, but they are there
if you want to use them)
part: An identifying string ('a' or 'b') of which part is being run. Most likely
used to set different hyperparameters for different datasets
dset_id: An identifying number of which dataset is given (1 or 2). Most likely
used to set different hyperparameters for different datasets
Returns
- a (# of training iterations, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated every minibatch
- a (# of epochs + 1, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated once at initialization and after each epoch
- a numpy array of size (1000, 2) of 1000 samples WITH decoder noise, i.e. sample z ~ p(z), x ~ p(x|z)
- a numpy array of size (1000, 2) of 1000 samples WITHOUT decoder noise, i.e. sample z ~ p(z), x = mu(z)
"""
""" YOUR CODE HERE """
model = # YOUR CODE HERE
train_loader = data.DataLoader(train_data, batch_size=128, shuffle=True)
test_loader = data.DataLoader(test_data, batch_size=128)
train_losses, test_losses = train_loop(model, train_loader, test_loader,
epochs=10, lr=1e-3)
train_losses = np.stack((train_losses['loss'], train_losses['reconstruction_loss'], train_losses['kl_loss']), axis=1)
test_losses = np.stack((test_losses['loss'], test_losses['reconstruction_loss'], test_losses['kl_loss']), axis=1)
samples_noise = model.sample(1000, noise=True)
samples_nonoise = model.sample(1000, noise=False)
return train_losses, test_losses, samples_noise, samples_nonoise
###Output
_____no_output_____
###Markdown
Results
###Code
def draw_2d(samples, title):
plt.scatter(samples[:, 0], samples[:, 1])
plt.title(title)
plt.show()
def plot_vae_training_plot(train_losses, test_losses, title):
elbo_train, recon_train, kl_train = train_losses[:, 0], train_losses[:, 1], train_losses[:, 2]
elbo_test, recon_test, kl_test = test_losses[:, 0], test_losses[:, 1], test_losses[:, 2]
plt.figure()
n_epochs = len(test_losses) - 1
x_train = np.linspace(0, n_epochs, len(train_losses))
x_test = np.arange(n_epochs + 1)
plt.plot(x_train, elbo_train, label='-elbo_train')
plt.plot(x_train, recon_train, label='recon_loss_train')
plt.plot(x_train, kl_train, label='kl_loss_train')
plt.plot(x_test, elbo_test, label='-elbo_test')
plt.plot(x_test, recon_test, label='recon_loss_test')
plt.plot(x_test, kl_test, label='kl_loss_test')
plt.legend()
plt.title(title)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
def q1_results(part, dset_id, fn):
print(f"Dataset {dset_id}{part}")
train_data, test_data = q1_sample_data(part, dset_id)
train_losses, test_losses, samples_noise, samples_nonoise = fn(train_data, test_data, part, dset_id)
print(f'Final -ELBO: {test_losses[-1, 0]:.4f}, Recon Loss: {test_losses[-1, 1]:.4f}, '
f'KL Loss: {test_losses[-1, 2]:.4f}')
plot_vae_training_plot(train_losses, test_losses, title=f'Dataset {dset_id}{part} Train Plot')
draw_2d(train_data, 'original data')
draw_2d(samples_noise, 'sampled with noise')
draw_2d(samples_nonoise, 'sampled without noise')
q1_results('a', 1, q1)
q1_results('a', 2, q1)
q1_results('b', 1, q1)
q1_results('b', 2, q1)
###Output
_____no_output_____
###Markdown
ReflectionCompare the sampled xs with and without decoder noise for datasets (a) and (b). For which datasets are the latents being used? Why is this happening (i.e. why are the latents being ignored in some cases)? **Write your answer (1-2 sentences):** Part 2: VAEs on ImagesAfter the previous exercise you should understand how to train simple VAE. Now let's move from 2D space to more complex image spaces. The training methodology is just the same, the only difference is if we want to have good results we should have better encoder and decoder models.In this section, you will train different VAE models on image datasets. Execute the cell below to visualize the two datasets (CIFAR10, and [SVHN](http://ufldl.stanford.edu/housenumbers/)).
###Code
from torchvision.datasets import SVHN, CIFAR10
from torchvision.utils import make_grid
def show_samples(samples, nrow=10, title='Samples'):
samples = (torch.FloatTensor(samples) / 255).permute(0, 3, 1, 2)
grid_img = make_grid(samples, nrow=nrow)
plt.figure()
plt.title(title)
plt.imshow(grid_img.permute(1, 2, 0))
plt.axis('off')
plt.show()
DATA_DIR = './data'
def get_cifar10():
train = CIFAR10(root=f'{DATA_DIR}/cifar10', train=True, download=True).data
test = CIFAR10(root=f'{DATA_DIR}/cifar10', train=False).data
return train, test
def get_svhn():
train = SVHN(root=f'{DATA_DIR}/svhn', split='train', download=True).data.transpose(0, 2, 3, 1)
test = SVHN(root=f'{DATA_DIR}/svhn', split='test', download=True).data.transpose(0, 2, 3, 1)
return train, test
def visualize_cifar10():
_, test = get_cifar10()
samples = test[np.random.choice(len(test), 100)]
show_samples(samples, title="CIFAR10 samples")
def visualize_svhn():
_, test = get_svhn()
print(test.shape)
samples = test[np.random.choice(len(test), 100)]
show_samples(samples, title="SVHN samples")
visualize_cifar10()
visualize_svhn()
###Output
_____no_output_____
###Markdown
Part (a) VAE (10 points)In this part, implement a standard VAE with the following characteristics:* 16-dim latent variables $z$ with standard normal prior $p(z) = N(0,I)$* An approximate posterior $q_\theta(z|x) = N(z; \mu_\theta(x), \Sigma_\theta(x))$, where $\mu_\theta(x)$ is the mean vector, and $\Sigma_\theta(x)$ is a diagonal covariance matrix* A decoder $p(x|z) = N(x; \mu_\phi(z), I)$, where $\mu_\phi(z)$ is the mean vector. (We are not learning the covariance of the decoder)You can play around with different architectures and try for better results, but the following encoder / decoder architecture below suffices (Note that image input is always $32\times 32$.```conv2d(in_channels, out_channels, kernel_size, stride, padding)transpose_conv2d(in_channels, out_channels, kernel_size, stride, padding)linear(in_dim, out_dim)Encoder conv2d(3, 32, 3, 1, 1) relu() conv2d(32, 64, 3, 2, 1) 16 x 16 relu() conv2d(64, 128, 3, 2, 1) 8 x 8 relu() conv2d(128, 256, 3, 2, 1) 4 x 4 relu() flatten() linear(4 * 4 * 256, 2 * latent_dim)Decoder linear(latent_dim, 4 * 4 * 128) relu() reshape(4, 4, 128) transpose_conv2d(128, 128, 4, 2, 1) 8 x 8 relu() transpose_conv2d(128, 64, 4, 2, 1) 16 x 16 relu() transpose_conv2d(64, 32, 4, 2, 1) 32 x 32 relu() conv2d(32, 3, 3, 1, 1)```You may find the following training tips helpful* When computing reconstruction loss and KL loss, average over the batch dimension and **sum** over the feature dimension* When computing reconstruction loss, it suffices to just compute MSE between the reconstructed $x$ and true $x$ (you can compute the extra constants if you want)* Use batch size 128, learning rate $10^{-3}$, and an Adam optimizer**You will provide the following deliverables**1. Over the course of training, record the average full negative ELBO, reconstruction loss, and KL term of the training data (per minibatch) and test data (for your entire test set). Code is provided that automatically plots the training curves. 2. Report the final test set performance of your final model3. 100 samples from your trained VAE4. 50 real-image / reconstruction pairs (for some $x$, encode and then decode)5. Interpolations of length 10 between 10 pairs of test images from your VAE (100 images total) SolutionFill out the function below and return the neccessary arguments. Feel free to create more cells if need be
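Deliverable 5 above only needs linear interpolation between posterior means in latent space. A sketch, assuming hypothetical `model.encoder` / `model.decoder` attributes that return `(mu, log_std)` and the mean reconstruction respectively (those names are placeholders, not part of the starter code):
```
import torch

@torch.no_grad()
def latent_interpolations(model, images, n_steps=10):
    # images: (2 * n_pairs, C, H, W) float tensor; consecutive entries form a pair.
    mu, _ = model.encoder(images)            # use posterior means as the endpoints
    z_a, z_b = mu[0::2], mu[1::2]            # (n_pairs, latent_dim) each
    rows = []
    for t in torch.linspace(0, 1, n_steps):
        rows.append(model.decoder((1 - t) * z_a + t * z_b))
    x = torch.stack(rows, dim=1)             # (n_pairs, n_steps, C, H, W)
    return x.flatten(0, 1)                   # one row of n_steps images per pair
```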
###Code
def q2_a(train_data, test_data, dset_id):
"""
train_data: An (n_train, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
test_data: An (n_test, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
dset_id: An identifying number of which dataset is given (1 or 2). Most likely
used to set different hyperparameters for different datasets
Returns
- a (# of training iterations, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated every minibatch
- a (# of epochs + 1, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated once at initialization and after each epoch
- a (100, 32, 32, 3) numpy array of 100 samples from your VAE with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 50 real image / reconstruction pairs
FROM THE TEST SET with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 10 interpolations of length 10 between
pairs of test images. The output should be those 100 images flattened into
the specified shape with values in {0, ..., 255}
"""
""" YOUR CODE HERE """
###Output
_____no_output_____
###Markdown
ResultsOnce you've finished `q2_a`, execute the cells below to visualize and save your results.
###Code
import torch.nn.functional as F
def q2_results(dset_id, fn):
if dset_id.lower() == 'cifar':
train_data, test_data = get_cifar10()
elif dset_id.lower() == 'svhn':
train_data, test_data = get_svhn()
else:
raise ValueError("Unsupported dataset")
train_losses, test_losses, samples, reconstructions, interpolations = fn(train_data, test_data, dset_id)
samples, reconstructions, interpolations = samples.astype('float32'), reconstructions.astype('float32'), interpolations.astype('float32')
print(f'Final -ELBO: {test_losses[-1, 0]:.4f}, Recon Loss: {test_losses[-1, 1]:.4f}, '
f'KL Loss: {test_losses[-1, 2]:.4f}')
plot_vae_training_plot(train_losses, test_losses, f'Dataset {dset_id} Train Plot')
show_samples(samples, title=f'{dset_id} samples')
show_samples(reconstructions, title=f'{dset_id} Reconstructions')
show_samples(interpolations, title=f'{dset_id} Interpolations')
q2_results('cifar', q2_a)
q2_results('svhn', q2_a)
###Output
_____no_output_____
###Markdown
Part (b) VAE with AF Prior (10 points)In this part, implement a VAE with an Autoregressive Flow prior ([VLAE](https://arxiv.org/abs/1611.02731)) with the following characteristics:* 16-dim latent variables $z$ with a MADE prior, with $\epsilon \sim N(0, I)$* An approximate posterior $q_\theta(z|x) = N(z; \mu_\theta(x), \Sigma_\theta(x))$, where $\mu_\theta(x)$ is the mean vector, and $\Sigma_\theta(x)$ is a diagonal covariance matrix* A decoder $p(x|z) = N(x; \mu_\phi(z), I)$, where $\mu_\phi(z)$ is the mean vector. (We are not learning the covariance of the decoder)You can use the same encoder / decoder architectures and training hyperparameters as part (a). For your MADE prior, it would suffice to use two hidden layers of size $512$. More explicitly, your MADE AF (mapping from $z\rightarrow \epsilon$) should output location $\mu_\psi(z)$ and scale parameters $\sigma_\psi(z)$ and do the following transformation on $z$:$$\epsilon = z \odot \sigma_\psi(z) + \mu_\psi(z)$$where the $i$th element of $\sigma_\psi(z)$ is computed from $z_{<i}$ (same for $\mu_\psi(z)$) and optimize the objective$$-E_{z\sim q(z|x)}[\log{p(x|z)}] + E_{z\sim q(z|x)}[\log{q(z|x)} - \log{p(z)}]$$where $$\log{p(z)} = \log{p(\epsilon)} + \log{\det\left|\frac{d\epsilon}{dz}\right|}$$**You will provide the following deliverables**1. Over the course of training, record the average full negative ELBO, reconstruction loss, and KL term of the training data (per minibatch) and test data (for your entire test set). Code is provided that automatically plots the training curves. 2. Report the final test set performance of your final model3. 100 samples from your trained VAE4. 50 real-image / reconstruction pairs (for some $x$, encode and then decode)5. Interpolations of length 10 between 10 pairs of test images from your VAE (100 images total) SolutionFill out the function below and return the neccessary arguments. Feel free to create more cells if need be
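Sampling from this model needs the inverse direction of the flow: draw $\epsilon$ from a standard normal and solve for $z$ one coordinate at a time, which works because $\sigma_i$ and $\mu_i$ depend only on $z_{<i}$. A sketch under the same assumed `made(z) -> (mu, log_sigma)` interface (an assumption, not the provided code):
```
import torch

@torch.no_grad()
def sample_from_af_prior(made, n, latent_dim=16):
    # Inverts eps = z * sigma(z) + mu(z) dimension by dimension.
    eps = torch.randn(n, latent_dim)
    z = torch.zeros(n, latent_dim)
    for i in range(latent_dim):
        mu, log_sigma = made(z)                              # only z_<i matters for index i
        z[:, i] = (eps[:, i] - mu[:, i]) / log_sigma[:, i].exp()
    return z
```
The sampled z is then pushed through the decoder exactly as in part (a).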
###Code
def q2_b(train_data, test_data, dset_id):
"""
train_data: An (n_train, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
test_data: An (n_test, 32, 32, 3) uint8 numpy array of color images with values in {0, ..., 255}
dset_id: An identifying number of which dataset is given (1 or 2). Most likely
used to set different hyperparameters for different datasets
Returns
- a (# of training iterations, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated every minibatch
- a (# of epochs + 1, 3) numpy array of full negative ELBO, reconstruction loss E[-log p(x|z)],
and KL term E[KL(q(z|x) | p(z))] evaluated once at initialization and after each epoch
- a (100, 32, 32, 3) numpy array of 100 samples from your VAE with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 50 real image / reconstruction pairs
FROM THE TEST SET with values in {0, ..., 255}
- a (100, 32, 32, 3) numpy array of 10 interpolations of length 10 between
pairs of test images. The output should be those 100 images flattened into
the specified shape with values in {0, ..., 255}
"""
""" YOUR CODE HERE """
###Output
_____no_output_____
###Markdown
ResultsOnce you've finished `q2_b`, execute the cells below to visualize and save your results.
###Code
q2_results('cifar', q2_b)
q2_results('svhn', q2_b)
###Output
_____no_output_____ |
HeroesOfPymoli/HeroesOfPymoli.ipynb | ###Markdown
Heroes Of Pymoli Data Analysis
* Of the 1163 active players, the vast majority are male (84%). There is also a smaller but notable proportion of female players (14%).
* Our peak age demographic falls between 20-24 (44.8%), with secondary groups falling between 15-19 (18.60%) and 25-29 (13.4%).
Heroes Of Pymoli Data Conclusion
* Data Analytics Boot Camp
* Assignment - Heroes Of Pymoli - Pandas
* Heroes Of Pymoli Project Report
* Tanvir Khan
* March 28, 2020
1. Female players spent slightly more per purchase on average (about 5.44% higher than male players) even though males outnumber them by at least a 6-to-1 ratio; the difference works out to only about 0.19 dollars per purchase.
2. While the total purchase count is 780, the number of unique players is only 576, so there is a good amount of repeat purchasing in the Heroes of Pymoli game.
3. About 75% of players fall in the 15-29 age range and spend about 3.02 dollars on average, while the remaining 25% (younger than 15 or older than 29) spend about 3.15 dollars.
4. The age group with the highest average spend per person is 35-39 at 4.76 dollars, yet it makes up only about 5 percent of players, compared to the 15-29 group, which makes up about 26.54 percent in total and averages 4.00 dollars per person.
5. The best represented age group in the data is 20-24 years, accounting for about 45% of the entire sample. This group makes up the largest portion of total purchase value, but its average purchase price is middling at about 4.05 dollars, and its average total purchase per person (4.32 dollars) is lower than that of some other age groups.
6. The majority of purchases come from players aged 15-24, who make up nearly 50% of the player base. However, they do not have the highest purchasing value (about 4.02 dollars on average), which may be explained by their style of buying more of the less expensive items.
7. The Top 5 Spenders each bought at least three items, while the majority of players bought only one. The top 5 spenders averaged 4.01 dollars per purchase with a total purchase value of about 20.03 dollars, so even the biggest spenders do not put much more than 20.00 dollars into the game.
8. The list of most profitable items is dominated by some of the most expensive items rather than by the items purchased most often. Final Critic, Oathbreaker (Last Hope of the Breaking Storm) and Fiery Glass Crusader are the top 3 most profitable items and are also among the top 4 most popular items.
9. The age group that spends the most money is 20-24, with 1,114.06 dollars in total purchase value and an average purchase of 4.32 dollars. In contrast, the demographic group with the highest average purchase is 35-39, at 4.76 dollars and a total purchase value of 147.67 dollars.
10. Of the 780 total purchases made by our 576 players to date, the majority (84%) were made by male players, compared to only 14% made by female players.
In conclusion, across all age and gender demographics, players are putting a significant amount of cash into the game over the lifetime of their gameplay. 
Since the majority of users are male and a substantial share of purchases comes from users aged 15-29, it would be wise to market heavily towards the college/young-adult male crowd. The most popular item, Final Critic, has a purchase count of only 13 out of 576 players, which suggests that people are interested in a variety of items. The Top 5 Sellers are mostly under three dollars, while the Top 5 Most Profitable items include three priced over four dollars. So producers should continue offering a selection of high-cost/high-value items for those who will pay, along with a large selection of low-cost items for the rest of the players. Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# purchase_data["SN"].unique()
# Total Number of Players
# Using the unique/nunique functions since the file is too big to check by eye for repeated values
# purchase_data["SN"].count() # gives us the count including duplicate values & does not match the required output
#player_count = len(purchase_data["SN"].unique()) # Method - 1
player_count = purchase_data["SN"].nunique() # Method - 2
player_count
# Create Summary DataFrame
player_count_df = pd.DataFrame({"Total Players": [player_count]})
# Display
player_count_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Get the total number of Unique items, use unique function since it is asked in the output
num_unique_item = len(purchase_data["Item ID"].unique()) #
num_unique_item
# Calculate average purchase price
avg_purchase_price = purchase_data["Price"].mean()
avg_purchase_price
# Get the total number of purchases
# num_purchases = len(purchase_data["Purchase ID"].unique()) # Method - 1
num_purchases = purchase_data["Purchase ID"].nunique() # Method - 2
num_purchases
# Get total revenue = sum of all prices
total_revenue = purchase_data["Price"].sum()
total_revenue
# Create a summary data frame to hold the results into the new dataframe "purchasing_analysis"
purchasing_analysis_df = pd.DataFrame([{
"Number of Unique Items": num_unique_item,
"Average Price": avg_purchase_price,
"Number of Purchases": num_purchases,
"Total Revenue": total_revenue, }],
columns=["Number of Unique Items", "Average Price", "Number of Purchases", "Total Revenue"])
purchasing_analysis_df
# Formatting
# Convert to currency format
# Optional: give the displayed data cleaner formatting, format to $0.00 - .map("${0:.2f}".format
purchasing_analysis_df["Average Price"] = purchasing_analysis_df["Average Price"].map("${0:.2f}".format)
purchasing_analysis_df["Total Revenue"] = purchasing_analysis_df["Total Revenue"].map("${0:,.2f}".format)
# Display
purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Percentage and Count of Male Players
# Percentage and Count of Female Players
# Percentage and Count of Other / Non Disclosed
player_totals = purchase_data.loc[:,["Gender", "SN", "Age"]]
player_totals = player_totals.drop_duplicates()
player_count = player_totals.count()[0]
gender_totals = player_totals["Gender"].value_counts()
gender_percents = ((gender_totals/player_count)*100).map("{0:,.2f}%".format)
### Gender Demographics DataFrame ###
gender_demographics_df = pd.DataFrame({"Total Count": gender_totals, "Percentage of Players": gender_percents})
# Formatting
# Rounding to 2 decimal points
gender_demographics_df = gender_demographics_df.round(2)
#Display
gender_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
#### GROUP BY method (short & easily comprehensible) VS LOC method (lengthy & tedious) ###
# Purchasing Analysis (Gender) #
# Group By on "Gender"
# Purchase count
gender_purchase_count = purchase_data.groupby(["Gender"])["Price"].count().rename("Purchase Count")
# Average purchase price
average_gender_purchase_price = purchase_data.groupby(["Gender"])["Price"].mean().rename("Average Purchase Price")
# Total Purchase Value
total_gender_purchase_value = purchase_data.groupby(["Gender"])["Price"].sum().rename("Total Purchase Value")
# Average total purchase per person
average_purchase_total_per_gender = total_gender_purchase_value / gender_demographics_df["Total Count"]
### Purchasing Analysis (Gender) DataFrame ###
gender_purchasing_analysis_df = pd.DataFrame({"Purchase Count": gender_purchase_count,
"Average Purchase Price": average_gender_purchase_price,
"Total Purchase Value": total_gender_purchase_value,
"Avg Total Purchase Per Person": average_purchase_total_per_gender})
# Formatting
# format to currency formatting $0.00 - .map("${0:.2f}".format
gender_purchasing_analysis_df["Average Purchase Price"] = gender_purchasing_analysis_df["Average Purchase Price"].map("${:.2f}".format)
gender_purchasing_analysis_df["Total Purchase Value"] = gender_purchasing_analysis_df["Total Purchase Value"].map("${:.2f}".format)
gender_purchasing_analysis_df["Avg Total Purchase Per Person"] = gender_purchasing_analysis_df["Avg Total Purchase Per Person"].map("${:.2f}".format)
gender_purchasing_analysis_df = gender_purchasing_analysis_df.loc[:, ["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Avg Total Purchase Per Person"]]
# Display
gender_purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Step: find out minimum and maximum ages: check to get the age range
# print(purchase_data["Age"].max()) # Max age 45
# print(purchase_data["Age"].min()) # Min age 7
# Establish bins for ages
age_bins = [0, 9, 14, 19, 24, 29, 34, 39, 46]
# Create corresponding names for bins respectively
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Categorize the existing players using the age bins. Hint: use pd.cut()
# Place group_names into new column inside DataFrame
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], bins=age_bins, labels=group_names)
purchase_data
# Calculate the column values
# Calculate the numbers and percentages by age group
# Create a GroupBy Object on "Age Range"
age_range = purchase_data.groupby("Age Ranges") # Method 2 to set index name
age_range
# Count total number of players by Age Demographics
total_count_age = age_range["SN"].nunique()
total_count_age
# Output: Age Group = group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Total num of players in corresponding age group= Count = [17, 22, 107, 258, 77, 52, 31, 12]
# Calculate total percentages by Age Demographics
# Percent Format to 0.00% - .map("{0:.2f}".format
percentage_by_age = (round((total_count_age / player_count * 100), 2)).map("{0:,.2f}%".format)
percentage_by_age
# Create Summary DataFrame to hold the above results & to match output = renaming columns
age_demographics_df = pd.DataFrame({
"Total Count": total_count_age,
"Percentage of Players": percentage_by_age})
age_demographics_df
# Clean up and remove "Age Group" category name from DataFrame
age_demographics_df.index.name = None
# Display
age_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Establish Bins for Ages & Create Corresponding Names For The Bins
bins = [0, 9, 14, 19, 24, 29, 34, 39, 46]
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Categorize the existing players using the age bins. Hint: use pd.cut()
# Place Data Series Into New Column Inside DataFrame
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], bins=age_bins, labels=group_names)
age_range = purchase_data.groupby("Age Ranges") # Method#2 - Set index name
age_range
# Calculate the column values
# Calculate "Purchase Count"
age_purchase_count = age_range["SN"].count()
age_purchase_count
# Calculate "Average Purchase Price"
avg_age_purchase_price = round(age_range["Price"].mean(),2)
avg_age_purchase_price
# Calculate "Total Purchase Value"
total_age_purchase_value = round(age_range["Price"].sum(),2)
total_age_purchase_value
# Calculate "Avg Total Purchase per Person"
avg_total_age_purchase_person = round(total_age_purchase_value / total_count_age,2)
avg_total_age_purchase_person
# Create Summary DataFrame
age_purchasing_analysis_df = pd.DataFrame({
"Purchase Count": age_purchase_count,
"Average Purchase Price": avg_age_purchase_price,
"Total Purchase Value": total_age_purchase_value,
"Avg Total Purchase per Person": avg_total_age_purchase_person
})
# Formatting the columns into currency $0.00
# Function format to currency formatting $0.00 - '.map("${0:.2f}".format'
age_purchasing_analysis_df["Average Purchase Price"] = age_purchasing_analysis_df["Average Purchase Price"].map("${0:,.2f}".format)
age_purchasing_analysis_df["Total Purchase Value"] = age_purchasing_analysis_df["Total Purchase Value"].map("${0:,.2f}".format)
age_purchasing_analysis_df["Avg Total Purchase per Person"] = age_purchasing_analysis_df["Avg Total Purchase per Person"].map("${0:,.2f}".format)
# Display summary DataFrame
# age_purchasing_analysis_df.index.name = ("Age Ranges") Method#3 - Set index name
age_purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Identify the top 5 spenders in the game by total purchase value & GroupBy "SN" .head()
top_spenders = purchase_data.groupby("SN")
top_spenders
# Calculate "Purchase Count"
spender_purchase_count = top_spenders["Purchase ID"].count()
spender_purchase_count
# Calculate the column values
# Calculate "Average Purchase Price" for each screen name
average_spender_purchase_price = round(top_spenders["Price"].mean(),2)
average_spender_purchase_price
# Calculate "Total Purchase Value" for each screen name
total_spender_purchase_value = top_spenders["Price"].sum()
total_spender_purchase_value
# Create Summary DataFrame
top_spenders_df= pd.DataFrame({
"Purchase Count": spender_purchase_count,
"Average Purchase Price": average_spender_purchase_price,
"Total Purchase Value": total_spender_purchase_value
})
top_spenders_df
# Set ascending = False = descending in Pandas
# format to currency formatting $0.00 - .map("${0:.2f}".format
#sort the dataframe by total purchase values in descending order
sort_top_spenders = top_spenders_df.sort_values(["Total Purchase Value"], ascending=False)
# Formatting the average price and total price in the format in currency format $0.00
sort_top_spenders["Average Purchase Price"] = sort_top_spenders["Average Purchase Price"].map("${:,.2f}".format)
sort_top_spenders["Total Purchase Value"] = sort_top_spenders["Total Purchase Value"].map("${:,.2f}".format)
# Display top 5 spenders
sort_top_spenders.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Identify the top 5 most popular items by creating a new DataFrame
# Either get the item id, item name and price, group by item,
# then aggregate the data to get the total count and total sum for each group,
# or do the following step-by-step approach
# Select the item id, item name and price columns
popular_items_list = purchase_data[["Item ID", "Item Name", "Price"]]
popular_items_list
# GroupBy "Item ID" & "Item Name"
popular_items = popular_items_list.groupby(["Item ID","Item Name"])
popular_items
# Calculate "Purchase Count"
item_purchase_count = popular_items["Price"].count()
item_purchase_count
# Calculate "Item Price"
item_price = popular_items["Price"].sum()
item_price
# Calculate "Total Purchase Value"
item_purchase_value = item_price / item_purchase_count
item_purchase_value
# Create Summary DataFrame
most_popular_items = pd.DataFrame({
"Purchase Count": item_purchase_count,
"Item Price": item_purchase_value,
"Total Purchase Value": item_price
})
most_popular_items.head()
# Sorting by most popular in descending order (Highest Purchase Count=most popular)
popular_items_formatted = most_popular_items.sort_values(["Purchase Count"], ascending=False)
popular_items_formatted
# Format to currency formatting $0.00 - .map("${0:.2f}".format
# Format the price and total purchase value in the currency format
popular_items_formatted["Item Price"] = popular_items_formatted["Item Price"].map("${:,.2f}".format)
popular_items_formatted["Total Purchase Value"] = popular_items_formatted["Total Purchase Value"].map("${:,.2f}".format)
# Display 5 most popular items
popular_items_formatted.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Sort the above table by total purchase value in descending order & store in in new dataframe
most_profitable_items = most_popular_items.sort_values(["Total Purchase Value"], ascending=False).head()
most_profitable_items
# Create Summary DataFrame & Formatting
# Optional: give the displayed data cleaner formatting
# format to currency formatting $0.00 - .map("${0:.2f}".format - for item price and total purchase value
most_profitable_items["Item Price"] = most_profitable_items["Item Price"].map("${:,.2f}".format)
most_profitable_items["Total Purchase Value"] = most_profitable_items["Total Purchase Value"].map("${:,.2f}".format)
# Display the 5 most profitable items
most_profitable_items.head()
###Output
_____no_output_____
###Markdown
Pandas Challenge HeroesOfPymoli_starter
###Code
# Dependencies and Set Up
import pandas as pd
import os
#Importing File
pd.read_csv("purchase_data.csv")
#Set Up purchase_data to store data
purchase_data_pd = pd.read_csv("purchase_data.csv")
purchase_data_pd.head()
# Review Dataframe format
purchase_data_pd.dtypes
#Format Purchase ID to Integer
purchase_id_pd =purchase_data_pd["Purchase ID"].astype("int")
purchase_id_pd.head()
###Output
_____no_output_____
###Markdown
Player Count - Display the total number of players
###Code
#Display total player
total_player = purchase_data_pd["SN"].value_counts().count()
players_df = pd.DataFrame({"Total Players": [total_player]})
players_df
#Purchasing Analysis (Total)
purchase_data_pd.describe()
# Calculate unique items
unique_items = purchase_data_pd["Item ID"].value_counts().count()
unique_items
# Calculate total revenue
total_revenue = purchase_data_pd["Price"].sum()
total_revenue
#Run Basic Calculation
number_purchases = purchase_data_pd["Purchase ID"].value_counts().sum()
number_purchases
#Calculate average price
average_price = purchase_data_pd["Price"].mean()
average_price
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) - Run basic calculations to obtain number of unique items, average price, etc.- Create a summary data frame to hold the results- Displayed data cleaner formatting- Display the summary data frame
###Code
#Create Summary Frame to hold results
basic_calculations_df = pd.DataFrame({"Number Unique Items": [unique_items],"Average Price": [average_price],"Number of Purchases":[number_purchases], "Total Revenue":[total_revenue]})
basic_calculations_df.head().style.format({"Average Price": "${:,.2f}","Total Revenue": "${:,.2f}"})
# Display Gender and Player Columns
purchase_data_pd.columns
#Create a new dataframe to hold gender and students
gender_students_df = purchase_data_pd.loc[:, ["SN", "Gender"]]
gender_students_df.head()
#Check for duplicates
duplicate_students = gender_students_df[gender_students_df.duplicated()].count()
duplicate_students
###Output
_____no_output_____
###Markdown
Gender Demographics - Percentage and Count of Male Players- Percentage and Count of Female Players- Percentage and Count of Other / Non-Disclosed
###Code
# Use Gender as index and extract unique values
grouped_gender_df = gender_students_df.groupby(["Gender"])
unique_gender= grouped_gender_df.nunique()
unique_gender["SN"]
# create a new column and set it equal to the output of the calculation that we'd like to perform
unique_gender['Percent of Players'] = (unique_gender["SN"]/ total_player*100).map("{:,.2f}%".format)
unique_gender.head()
#Rename SN Column
unique_gender = unique_gender.rename(columns={"SN": "Total Count"})
unique_gender[["Total Count", "Percent of Players"]].head()
# Extract purchase count per gender
grouped_gender_df = purchase_data_pd.groupby(["Gender"])
purchase_gender = grouped_gender_df["Purchase ID"].nunique()
purchase_gender
#Create Purchase Gender into Dataframe
purchase_gender_df = pd.DataFrame(purchase_gender)
purchase_gender_df = purchase_gender_df.rename(columns={"Purchase ID": "Purchase Count"})
purchase_gender_df
#Calculate Total Purchase Value
value_gender = grouped_gender_df["Price"].sum()
value_gender
#Calculate Average Purchase Price
average_price = value_gender / purchase_gender
average_price.head()
# Add Average Purchase Price to Dataframe
purchase_gender_df['Average Purchase Price'] = average_price.map("${:,.2f}".format)
purchase_gender_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) - Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender- Create a summary data frame to hold the results
###Code
# Add Total Purchase Value to Dataframe
purchase_gender_df['Total Purchase Value'] = value_gender.map("${:,.2f}".format)
purchase_gender_df
#Create variable for numbers per gender
grouped_gender_df = gender_students_df.groupby(["Gender"])
unique_gender= grouped_gender_df.nunique()
gender_numbers = unique_gender["SN"]
gender_numbers
# Add Total Purchase Value to Dataframe
purchase_gender_df['Avg Total Purchase per Person'] = (value_gender/gender_numbers).map("${:,.2f}".format)
purchase_gender_df
###Output
_____no_output_____
###Markdown
Age Demographics - Establish bins for ages- Categorize the existing players using the age bins. Hint: use pd.cut()- Calculate the numbers and percentages by age group- Create a summary data frame to hold the results
###Code
#Create Data frame for existing players by age
age_players_df = purchase_data_pd.loc[:, ["SN", "Age"]]
age_players_df.head()
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) - Bin the purchase_data data frame by age- Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below- Create a summary data frame to hold the results
###Code
#Print min and max
print(age_players_df["Age"].max())
print(age_players_df["Age"].min())
#Create bins for ages
bins = [0,9,14,19,24,29,34,39,45]
#Create the names of the bins
label_names = ["<10", "10-14", "15-19", "20-24", "25-29","30-34","35-39","40+"]
#Categorize the existing players using the age bins
age_players_df["Age Group"] = pd.cut(age_players_df["Age"], bins, labels=label_names)
age_players_df.head()
#Group by group age
age_group = age_players_df.groupby("Age Group")
age_count = age_group['SN'].nunique()
age_demographics_df = pd.DataFrame(age_count)
age_demographics_df.head()
#Rename SN Column
age_demographics_df= age_demographics_df.rename(columns={"SN": "Total Count"})
age_demographics_df.head()
#Calculate Percentage of Players
age_demographics_df['Percent of Players']= (age_count/total_player*100).map("{:,.2f}%".format)
age_demographics_df.head(8)
#Create reduce dataframe
reduce_players_df = purchase_data_pd.loc[:, ["SN", "Age","Price", "Item ID", "Purchase ID"]]
reduce_players_df.head()
#Create bins for ages
bins = [0,9,14,19,24,29,34,39,45]
#Create the names of the bins
label_names = ["<10", "10-14", "15-19", "20-24", "25-29","30-34","35-39","40+"]
#Categorize the existing players using the age bins
reduce_players_df["Age Group"] = pd.cut(reduce_players_df["Age"], bins, labels=label_names)
reduce_players_df
#Age Group by Purchase ID
age_price_group = reduce_players_df.groupby("Age Group")
item_count = age_price_group['Purchase ID'].nunique()
item_count
#Create Dataframe for Purchase Analysis
purchase_analysis_df = pd.DataFrame(item_count)
purchase_analysis_df.head()
#Calculate Total Purchase Value
value_age = age_price_group["Price"].sum()
value_age
#Calculate Average Purchase Price
average_age = value_age / item_count
average_age.head()
# Add Average Purchase Price to Dataframe
purchase_analysis_df['Average Purchase Price'] = average_age.map("${:,.2f}".format)
purchase_analysis_df
# Add Total Purchase Price to Dataframe
purchase_analysis_df['Total Purchase Price'] = value_age.map("${:,.2f}".format)
purchase_analysis_df
# Add Avg Total Purchase per Person to Dataframe
purchase_analysis_df['Total Purchase per Person'] = (value_age/age_count).map("${:,.2f}".format)
purchase_analysis_df
###Output
_____no_output_____
###Markdown
Run basic calculations to obtain the results in the table below- Create a summary data frame to hold the results- Sort the total purchase value column in descending order Calculations
###Code
# Total Price per SN
sn_total= reduce_players_df.groupby("SN").sum()["Price"].rename("Total Purchase Value")
# Average Price per SN
sn_average= reduce_players_df.groupby("SN").mean()["Price"].rename("Average Purchase Price")
# Count Price per SN
sn_count= reduce_players_df.groupby("SN").count()["Price"].rename("Purchase Count")
###Output
_____no_output_____
###Markdown
Create DataFrame
###Code
sn_df= pd.DataFrame({"Total Purchase Value": sn_total, "Average Purchase Price": sn_average, "Purchase Count": sn_count})
###Output
_____no_output_____
###Markdown
Sort the total purchase value column in descending order
###Code
sorted_sn= sn_df.sort_values("Total Purchase Value", ascending= False)
sorted_sn["Total Purchase Value"].max
###Output
_____no_output_____
###Markdown
Map Total Purchase Value and Average Purchase Price to Currency
###Code
sorted_sn["Total Purchase Value"]= sorted_sn["Total Purchase Value"].map("${:,.2f}".format)
sorted_sn["Average Purchase Price"]= sorted_sn["Average Purchase Price"].map("${:,.2f}".format)
###Output
_____no_output_____
###Markdown
Display only top 5
###Code
sorted_sn.head()
###Output
_____no_output_____
###Markdown
Most Popular Items - Retrieve the Item ID, Item Name, and Item Price columns- Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value- Create a summary data frame to hold the results- Sort the purchase count column in descending order Retrieve the Item ID, Item Name, and Item Price columns
###Code
items_pd= purchase_data_pd.loc[:,["Item ID", "Item Name", "Price"]]
###Output
_____no_output_____
###Markdown
Calculations
###Code
## Total Price per id
id_total= items_pd.groupby(["Item ID", "Item Name"]).sum()["Price"].rename("Total Purchase Value")
## Average Price per id
id_average= items_pd.groupby(["Item ID", "Item Name"]).mean()["Price"].rename("Item Price")
## Number of purchases per id
id_count= items_pd.groupby(["Item ID", "Item Name"]).count()["Price"].rename("Purchase Count")
###Output
_____no_output_____
###Markdown
Create DataFrame
###Code
id_df= pd.DataFrame({"Purchase Count": id_count,"Total Purchase Value":id_total, "Item Price": id_average })
###Output
_____no_output_____
###Markdown
Sort Values by Purchase Count
###Code
sorted_id_df= id_df.sort_values("Purchase Count", ascending= False)
sorted_id_df.head()
###Output
_____no_output_____
###Markdown
Map Total Purchase Value and Average Purchase Price to currency
###Code
sorted_id_df["Total Purchase Value"]= sorted_id_df["Total Purchase Value"].map("${:,.2f}".format)
sorted_id_df["Item Price"]= sorted_id_df["Item Price"].map("${:,.2f}".format)
###Output
_____no_output_____
###Markdown
Display top 5 Items
###Code
top_5= sorted_id_df[:5]
top_5
###Output
_____no_output_____
###Markdown
Most Profitable Items - Sort the above table by total purchase value in descending order
###Code
prof_items= id_df.sort_values("Total Purchase Value", ascending= False)
prof_items["Total Purchase Value"]= prof_items["Total Purchase Value"].map("${:,.2f}".format)
prof_items["Item Price"]= prof_items["Item Price"].map("${:,.2f}".format)
top_5_prof = prof_items.head()
top_5_prof
###Output
_____no_output_____
###Markdown
Pandas Homework - Heroes of PymoliYacub Bholat Data Analysis and Visualization Boot Camp Due: 14 December 2019 Load file and display head to see how table is set up
###Code
import os
import pandas as pd
path = os.path.join("Resources", "purchase_data.csv")
df = pd.read_csv(path)
df_grouped_sn = df.groupby(
"SN"
) # Prepared in advance for a couple of the question sets
df_unique = df[["SN", "Age", "Gender"]].drop_duplicates(
"SN"
) # df with unique players and attributes
# format floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
df.head()
###Output
_____no_output_____
###Markdown
Player Count* Display the total number of players
###Code
## Player Count
players_total = df_unique.shape[0]
players_total
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total)* Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchasing_analysis_df = pd.DataFrame(
{
"Number of Unique Items": df["Item ID"].unique().size,
"Average Price": df["Price"].mean(),
"Number of Purchases": df["Purchase ID"].count(),
"Total Revenue": df["Price"].sum(),
},
index=[0],
)
# format floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics* Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
genders = ["Male", "Female", "Other / Non-Disclosed"]
gender_series = df_grouped_sn["Gender"].min()
# use dict comprehension to get player count for each gender
gender_stats = {
gender: [gender_series[gender_series == gender].count()] for gender in genders
}
# use list comprehension to append percentage values into list
[
gender_stats[gender].append(gender_stats[gender][0] / players_total * 100)
for gender in genders
]
# reformat floats to display as percentages
pd.options.display.float_format = "{:,.2f}%".format
# create and display dataframe
gender_dem_df = pd.DataFrame.from_dict(
gender_stats, orient="index", columns=["Total Count", "Percentage of Players"]
)
gender_dem_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender)* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
df_gender_price_group = df.groupby("Gender")["Price"]
purchasing_analysis_gender_df = pd.DataFrame(
{
"Purchase Count": df_gender_price_group.count(),
"Average Purchase Price": df_gender_price_group.mean(),
"Total Purchase Value": df_gender_price_group.sum(),
"Avg Total Purchase per Person": df_gender_price_group.sum()
/ gender_dem_df["Total Count"],
}
)
# reformat floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
purchasing_analysis_gender_df
###Output
_____no_output_____
###Markdown
Age Demographics* Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
age_bins = [0, 10, 15, 20, 25, 30, 35, 40, float("inf")]
# create series with age group for unique players
x = pd.cut(
x=df_unique["Age"],
bins=age_bins,
labels=["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"],
right=False,
)
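# With right=False the bins are left-closed / right-open: [0, 10), [10, 15), ..., [40, inf).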
# create new column with age_groups
df_unique = df_unique.assign(age_group=x)
age_group_counts = df_unique.groupby("age_group")["Age"].count()
age_demographics_df = pd.DataFrame(
{
"Total Count": age_group_counts,
"Percentage of Players": age_group_counts / players_total * 100,
}
)
# reformat floats to display as percentages
pd.options.display.float_format = "{:,.2f}%".format
age_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age)* Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# create series with age group for each purchase
x = pd.cut(
x=df["Age"],
bins=age_bins,
labels=["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"],
right=False,
)
# create new column with age_groups
df = df.assign(age_group=x)
age_price_df_grouped = df[["age_group", "Price"]].groupby("age_group")["Price"]
purchasing_analysis_age_df = pd.DataFrame(
{
"Purchase Count": age_price_df_grouped.count(),
"Average Purchase Price": age_price_df_grouped.mean(),
"Total Purchase Value": age_price_df_grouped.sum(),
"Avg Total Purchase per Person": age_price_df_grouped.sum() / age_group_counts,
},
)
# reformat floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
purchasing_analysis_age_df
###Output
_____no_output_____
###Markdown
Top Spenders* Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# get top 5 spenders
top_sn_sales = df_grouped_sn["Price"].sum().sort_values(ascending=False)[:5]
top_sn = top_sn_sales.index.values
top_sales = top_sn_sales.values
# pull records of top 5 spenders
df_top_sales = df[df["SN"].isin(top_sn)]
df_top_sales_grouped = df_top_sales.groupby("SN")["Price"]
top_spenders_df = pd.DataFrame(
{
"Purchase Count": df_top_sales_grouped.count(),
"Average Purchase Price": df_top_sales_grouped.mean(),
"Total Purchase Value": df_top_sales_grouped.sum(),
}
).sort_values("Total Purchase Value", ascending=False)
# reformat floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
top_spenders_df
###Output
_____no_output_____
###Markdown
Most Popular Items* Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame NOTE TO GRADERS: I sorted by Purchase Count and by Total Purchase Value to more logically sort items that have the same number of purchases.
###Code
most_popular_df = df[["Item ID", "Item Name", "Price"]]
most_popular_df_grouped = (
most_popular_df.groupby(["Item ID", "Item Name"])
.count()
.sort_values("Price", ascending=False)
)
most_popular_purchase_item_id = most_popular_df_grouped.reset_index()["Item ID"]
most_popular_df = most_popular_df[
most_popular_df["Item ID"].isin(most_popular_purchase_item_id)
]
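# Note: most_popular_purchase_item_id contains every Item ID from the grouped count above, so this isin() filter keeps all rows.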
most_popular_df_grouped = most_popular_df.groupby(["Item ID", "Item Name"])["Price"]
most_popular_df = pd.DataFrame(
{
"Purchase Count": most_popular_df_grouped.count(),
"Item Price": most_popular_df_grouped.min(),
"Total Purchase Value": most_popular_df_grouped.sum(),
}
).sort_values(["Purchase Count", "Total Purchase Value"], ascending=False)
# reformat floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
most_popular_df.head(13)
###Output
_____no_output_____
###Markdown
Most Profitable Items* Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# reformat floats to display in dollars
pd.options.display.float_format = "${:,.2f}".format
most_popular_df = most_popular_df.sort_values("Total Purchase Value", ascending=False)
most_popular_df.head(10)
###Output
_____no_output_____
###Markdown
Observations:1. Gender Demographics: Based on the gender demographics data, 84% of the players are male. The company has steady sales for the male demographic, and hence this is a good candidate for a targeted marketing campaign to further increase sales. However, since only 14% of the customers are female, the company should look into what factors are not working and, furthermore, into strategies to improve sales to female customers.2. Age Demographics: Heroes of Pymoli has its highest sales percentage, 44.7%, for customers aged 20-24. The 15-19 and 25-29 age groups are 2nd and 3rd on the list, with sales between 13% and 19%. Conversely, the two age groups with the worst sales record are ages under 10 and over 40.3. Most Popular and Most Expensive: A cross-reference between the most popular and most expensive items analyses shows that 'Final Critic', 'Oathbreaker, Last Hope of the Breaking Storm' and 'Fiery Glass Crusader' are in the top 5 lists for both categories. Thus, the company could generate more sales revenue if it has marketing strategies in place to push for increased sales of these items.Conclusion: Heroes of Pymoli performs best among the male demographic and the 20-24 age group. The top 2 best-selling items are 'Final Critic' and 'Oathbreaker, Last Hope of the Breaking Storm'.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
#purchase_data.head()
# Use len/value count on player SN to find unique players
players_count = len(purchase_data["SN"].value_counts())
# Create a data frame with total players named player count
total_players_df = pd.DataFrame({"Total Players":[players_count]})
total_players_df
#Run basic calculations to obtain number of unique items, average price, etc.
#Unique Items
unique_items_count = len(purchase_data["Item ID"].value_counts())
# Average Price
avg_price=purchase_data["Price"].mean()
# Number of purchases
total_purchase=purchase_data["Purchase ID"].count()
#Total Revenue
tot_revenue=purchase_data["Price"].sum()
#Create a summary data frame to hold the results
sales_summary_df=pd.DataFrame({"Number of Unique Items":[unique_items_count],
"Average Price":[avg_price],
"Number of Purchases":[total_purchase] ,
"Total Revenue":[tot_revenue]
})
#Optional: give the displayed data cleaner formatting
#Display the summary data frame
sales_summary_df.style.format({"Average Price":"${:,.2f}",
"Total Revenue": '${:,.2f}'})
#Percentage and Count of Male Players
#Percentage and Count of Female Players
#Percentage and Count of Other / Non-Disclosed
#group by gender
gender_group_df=purchase_data.groupby("Gender")
#count per group
count_gender=gender_group_df["SN"].nunique()
#count_gender
percentage_gender=(count_gender/players_count)*100
#Gender Demographics
gender_demographics_df=pd.DataFrame({"Total Count":count_gender, "Percentage of Players":percentage_gender})
# Format the values sorted by total count in descending order, and two decimal places for the percentage
gender_demographics_df.sort_values(["Total Count"], ascending = False).style.format({"Percentage of Players":"{:.2f}"})
#Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
#Total number of purchases per gender
Purchase_Count=gender_group_df["Purchase ID"].count()
#Average purchase price calculation per gender
Average_Purchase_Price=gender_group_df["Price"].mean()
#Total value of purchase per gender
Total_Purchase_Value=gender_group_df["Price"].sum()
#Average Total Purchase per person per gender
Avg_Total_Purchase_per_Person=Total_Purchase_Value/count_gender
#Create a summary data frame to hold the results
Purchasing_Analysis_df=pd.DataFrame({"Purchase Count":Purchase_Count, "Average Purchase Price":Average_Purchase_Price
,"Total Purchase Value":Total_Purchase_Value, "Avg Total Purchase per Person":Avg_Total_Purchase_per_Person})
#Optional: give the displayed data cleaner formatting
#Display the summary data frame
Purchasing_Analysis_df.style.format({"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}",
"Avg Total Purchase per Person":"${:,.2f}"})
#add max age
max_age=max(purchase_data["Age"])+1
#Bin the purchase_data data frame by age
age_bins=[0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, max_age]
age_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
purchase_data["Age Group"] = pd.cut(purchase_data["Age"],age_bins, labels=age_labels)
#purchase_data calculation
age_grouped_df = purchase_data.groupby("Age Group")
Total_Count_by_age=age_grouped_df["SN"].nunique()
Percentage_of_Players=(Total_Count_by_age/players_count) * 100
#Create a summary data frame to hold the results
Age_Demographics_df=pd.DataFrame({"Total Count": Total_Count_by_age
,"Percentage of Players": Percentage_of_Players})
#Optional: give the displayed data cleaner formatting
#Display the summary data frame
Age_Demographics_df.style.format({"Percentage of Players":"{:,.2f}"})
#Bin the purchase_data data frame by age
#Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
Purchase_Count= age_grouped_df["Purchase ID"].count()
# Obtain average purchase price by age group
Average_Purchase_Price = age_grouped_df["Price"].mean()
# Calculate total purchase value by age group
Total_Purchase_Value = age_grouped_df["Price"].sum()
# Calculate the average purchase per person in the age group
Avg_Total_Purchase_per_Person= Total_Purchase_Value/Total_Count_by_age
#Create a summary data frame to hold the results
Purchasing_Analysis_by_Age_df = pd.DataFrame({"Purchase Count": Purchase_Count,
"Average Purchase Price": Average_Purchase_Price,
"Total Purchase Value":Total_Purchase_Value,
"Avg Total Purchase per Person": Avg_Total_Purchase_per_Person})
#Optional: give the displayed data cleaner formatting
#Display the summary data frame
Purchasing_Analysis_by_Age_df.style.format({"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}",
"Avg Total Purchase per Person":"${:,.2f}"})
#Run basic calculations to obtain the results in the table below
spenders_grouped_df = purchase_data.groupby("SN")
# Count the total purchases by ID
Purchase_Count_by_spend = spenders_grouped_df["Purchase ID"].count()
# Calculate the average purchase by ID
Average_Purchase_Price_spender = spenders_grouped_df["Price"].mean()
# Calculate total purchase
Total_Purchase_Value_spender = spenders_grouped_df["Price"].sum()
#Create a summary data frame to hold the results
top_spenders_df = pd.DataFrame({"Purchase Count": Purchase_Count_by_spend,
"Average Purchase Price": Average_Purchase_Price_spender,
"Total Purchase Value":Total_Purchase_Value_spender})
#Sort the total purchase value column in descending order
sorted_spenders_df = top_spenders_df.sort_values(["Total Purchase Value"], ascending=False).head()
#Optional: give the displayed data cleaner formatting
#Display a preview of the summary data frame
sorted_spenders_df.style.format({"Average Purchase Total":"${:,.2f}",
"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
#Retrieve the Item ID, Item Name, and Item Price columns
get_items_df = purchase_data[["Item ID", "Item Name", "Price"]]
#Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
group_items_df=get_items_df.groupby(["Item ID","Item Name"])
purchase_count=group_items_df["Price"].count()
total_purchase_value=group_items_df["Price"].sum()
item_price=total_purchase_value/purchase_count
#Create a summary data frame to hold the results
most_popular_items_df = pd.DataFrame({"Purchase Count": purchase_count,
"Item Price": item_price,
"Total Purchase Value":total_purchase_value})
#Sort the purchase count column in descending order
sorted_most_popular_items_df = most_popular_items_df.sort_values(["Purchase Count"], ascending=False).head()
#Optional: give the displayed data cleaner formatting
#Display a preview of the summary data frame
sorted_most_popular_items_df.style.format({"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
#Sort the above table by total purchase value in descending order
sorted_purchase_value_df = most_popular_items_df.sort_values(["Total Purchase Value"],
ascending=False).head()
#Optional: give the displayed data cleaner formatting
#Display a preview of the data frame
sorted_purchase_value_df.style.format({"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
import os
#load csv file
purchase_data = os.path.join("Resources/purchase_data.csv")
#create dataframe
purchase_df = pd.read_csv(purchase_data)
#test print df
purchase_df.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#count
Total_players = purchase_df['SN'].nunique(dropna=True)
#test count to see if correct
#print(Total_players)
#show count
pd.DataFrame({'Total Players': [Total_players]})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#run calculations
#number of Unique items
unique_items = purchase_df['Item ID'].nunique()
#average price
average_price = purchase_df['Price'].mean()
#number of purchases
total_purchases = purchase_df['Purchase ID'].count()
#total rev
total_revenue = purchase_df['Price'].sum()
#display summary data frame
purchase_analysis = {
'Unique Items': [purchase_df['Item ID'].nunique()],
'Average Price': [purchase_df['Price'].mean()],
'Total Purchases': [purchase_df['Purchase ID'].count()],
'Total Revenue': [purchase_df['Price'].sum()]
}
purchase_analysis_df = pd.DataFrame(purchase_analysis)
purchase_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#gender demo
#had to go back and eliminate duplicates
unique_df = purchase_df[['Gender','SN']].drop_duplicates()
#count of male, female, other/non-disclosed
players_gender = unique_df['Gender'].value_counts()
player_gender_df = pd.DataFrame(players_gender)
#percent by gender
percent_players = (unique_df['Gender'].value_counts()/(unique_df['Gender'].count()))*100
#combine percent to table
player_gender_df['Percent of Players'] = percent_players
gender_demo_df = pd.DataFrame(player_gender_df)
gender_demo_df
#change gender to total count
new_gender_demo_df = gender_demo_df.rename(columns={
'Gender': 'Total Count',
})
new_gender_demo_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#get count by gender
Prch_Analysis = purchase_df.groupby(['Gender']).count()['Purchase ID']
prch_analysis_df = pd.DataFrame(Prch_Analysis)
#Prch_Analysis
#player count
#player_count = purchase_df['SN'].count()
#player_count_df = pd.DataFrame(player_count)
#get average purchase price
unique_avg_purch= purchase_df.groupby(['Gender']).mean()['Price']
#unique_avg_purch
#get total purchase price
unique_tot_purch_df = purchase_df.groupby(['Gender']).sum()['Price']
#unique_tot_purch_df
#calculate Avg Total Purchase per Person
unique_tot_prch_person_df = unique_tot_purch_df/new_gender_demo_df['Total Count']
#add avg prch price to table
prch_analysis_df['Average Purchase Price'] = unique_avg_purch.map('${:.2f}'.format)
prch_avg_prch_df = pd.DataFrame(prch_analysis_df)
prch_avg_prch_df
#add total purchase price to table
prch_avg_prch_df['Total Purchase Value'] = unique_tot_purch_df.map('${:.2f}'.format)
prch_tot_purch_df = pd.DataFrame(prch_avg_prch_df)
prch_tot_purch_df
#add avg total purchase per person to table
prch_tot_purch_df['Avg Total Purchase per Person']=unique_tot_prch_person_df.map('${:.2f}'.format)
Purchasing_Analysis_Gender = pd.DataFrame(prch_tot_purch_df)
Purchasing_Analysis_Gender
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#create age demo dataframe
age_demographics = purchase_df[['Age','SN']].drop_duplicates()
#create bins
age_bins = [0, 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.9, 100]
#create group names
group_names = ['<10', '10-14', '15-19', '20-24', '25-29','30-34','35-39','40+']
#set up players by group
age_demographics['Age Group'] = pd.cut(age_demographics['Age'], age_bins, labels=group_names)
#calculations
age_count = age_demographics['Age Group'].value_counts()
#percentage
age_percentage = np.round(age_count / Total_players *100, decimals=2)
#display results
age_demographics = pd.DataFrame({'Total Count': age_count,
'Percentage of Players': age_percentage})
age_demographics.sort_index()
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#purchase Date data frame by age
purchase_df['Age Group']= pd.cut(purchase_df['Age'], age_bins, labels=group_names)
#calculations
purchase_count = purchase_df.groupby(['Age Group']).count()['Purchase ID']
#avg price
avg_price = purchase_df.groupby(['Age Group']).mean()['Price']
#total purchase
total_purchase_value = purchase_df.groupby(['Age Group']).sum()['Price']
#avg per person
avg_per_person = total_purchase_value / age_demographics['Total Count']
#display results
purchasing_analysis_age_df = pd.DataFrame({'Purchase Count': purchase_count,
'Average Purchase Price': avg_price.map('${:.2f}'.format),
'Total Purchase Value': total_purchase_value.map('${:.2f}'.format),
'Avg Total Purchase Per Person': avg_per_person.map('${:.2f}'.format)})
purchasing_analysis_age_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#calculations for top spenders
#calculate total purchase count per player
count_player=purchase_df.groupby('SN')['Purchase ID'].count()
count_player_df = pd.DataFrame(count_player)
#avg purchase price
avg_price = purchase_df.groupby(['SN']).mean()['Price']
#total purchase
total_purchase = purchase_df.groupby(['SN']).sum()['Price']
#add avg price to table
count_player_df['Average Purchase Price'] = avg_price.map('${:.2f}'.format)
new_count_player_df=pd.DataFrame(count_player_df)
new_count_player_df
#add total purchase price to table (kept numeric here so the sort below compares values, not strings)
new_count_player_df['Total Purchase Value'] = total_purchase
Top_Spender_df = pd.DataFrame(new_count_player_df)
Top_Spender_df
#rename purchase id to purchase count
new_top_spender_df = Top_Spender_df.rename(columns={'Purchase ID': 'Purchase Count'})
new_top_spender_df
#sort values numerically, then format the dollar amounts for display
top_spender_final_df = new_top_spender_df.sort_values(by=['Total Purchase Value'], ascending=False)
top_spender_final_df['Total Purchase Value'] = top_spender_final_df['Total Purchase Value'].map('${:.2f}'.format)
top_spender_final_df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#define items
most_items_df=purchase_df[['Item ID', 'Item Name', 'Price']]
#group items
grouped_most_items_df = most_items_df.groupby(['Item ID', 'Item Name'])
#calc
purchase_count = grouped_most_items_df['Item ID'].count()
item_price = grouped_most_items_df['Price'].mean()
total_purchase_value = grouped_most_items_df['Price'].sum()
#create data frame
most_items_df = pd.DataFrame({'Purchase Count': purchase_count,
'Item Price': item_price.map("${:.2f}".format),
'Total Purchase Value': total_purchase_value.map("${:,.2f}".format)})
#sort
most_items_df = most_items_df.sort_values(['Purchase Count'], ascending=False)
#results
most_items_df.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# sort in descending order by the numeric totals (the 'Total Purchase Value' column now holds
# formatted strings, so sorting it directly would compare text rather than dollar amounts)
most_items_df = most_items_df.reindex(total_purchase_value.sort_values(ascending=False).index)
most_items_df.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup.
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "./Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas DataFrame.
purchase_data = pd.read_csv(file_to_load)
purchase_data
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#Sum of unique player names.
players = purchase_data["SN"].unique()
total_players = len(players)
pd.DataFrame([{"Total Players": total_players}])
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Calculation of individual values.
unique_items = len(purchase_data["Item Name"].unique())
mean_price = purchase_data["Price"].mean()
purchase_total = purchase_data["Purchase ID"].count()
revenue = purchase_data["Price"].sum()
#Organized into a DataFrame.
pd.DataFrame([{"Number of Unique Items": unique_items,
"Average Price": f"${round(mean_price, 2)}",
"Number of Purchases": purchase_total,
"Total Revenue": f"${revenue}"}])
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Removes duplicate player names, matches one player to one gender.
player_names = purchase_data.drop_duplicates(subset = "SN", keep = "first")
#Grouping data.
group_gender = player_names.groupby("Gender")
count_gender = group_gender.count()
gender_df = pd.DataFrame(count_gender)
#Renaming and formatting.
gender_df["Total Count"] = gender_df["Purchase ID"]
gender_df["Percent of Players"] = gender_df['Purchase ID'] / total_players
gender_df["Percent of Players"] = pd.Series(["{0:.2f}%".format(val * 100) for val in gender_df['Percent of Players']], index = gender_df.index)
gender_df[["Total Count", "Percent of Players"]].loc[["Male", "Female", "Other / Non-Disclosed"]]
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Grouping data.
gender_group = purchase_data.groupby("Gender")
#Creating first three columns.
gender_purchase_count = gender_group.size()
gender_price_avg = gender_group["Price"].mean()
gender_price_total = gender_group["Price"].sum()
#Advanced calculations for "Average Total Purchase Per Person" column.
player_group = purchase_data.groupby("SN")
player_sum = player_group["Price"].sum()
player_with_gender = pd.merge(player_sum, player_names, on = "SN")
player_gender_group = player_with_gender.groupby("Gender")
player_gender_avg = player_gender_group["Price_x"].mean()
#Create DataFrame.
gender_analysis = pd.DataFrame({
"Purchase Count": gender_purchase_count,
"Average Purchase Price": gender_price_avg,
"Total Purchase Value": gender_price_total,
"Average Total Purchase Per Person": player_gender_avg
})
#Formatting.
gender_analysis["Average Purchase Price"] = pd.Series(["${0:.2f}".format(val) for val in gender_analysis["Average Purchase Price"]], index = gender_analysis.index)
gender_analysis["Total Purchase Value"] = pd.Series(["${0:.2f}".format(val) for val in gender_analysis["Total Purchase Value"]], index = gender_analysis.index)
gender_analysis["Average Total Purchase Per Person"] = pd.Series(["${0:.2f}".format(val) for val in gender_analysis["Average Total Purchase Per Person"]], index = gender_analysis.index)
gender_analysis
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#Establish bins and group labels.
bins = [0, 9, 14, 19, 24, 29, 34, 39, 50]
group_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#Binning and creating groups.
player_names["Age Group"] = pd.cut(player_names["Age"], bins, labels=group_labels, include_lowest = True)
age_group = player_names.groupby("Age Group")
#Finding data.
age_range = age_group["Age Group"].count()
age_percent = age_range / total_players
age_percent = pd.Series(["{0:.2f}%".format(val * 100) for val in age_percent], index = age_percent.index)
#Create DataFrame.
age_stats = pd.DataFrame({
"Total Players": age_range,
"Percentage of Players": age_percent
})
age_stats
###Output
<ipython-input-6-989b58a33f1a>:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
player_names["Age Group"] = pd.cut(player_names["Age"], bins, labels=group_labels, include_lowest = True)
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Establish bins and group labels.
bins = [0, 9, 14, 19, 24, 29, 34, 39, 50]
group_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins, labels=group_labels, include_lowest = True)
#Grouping and finding data.
purchase_group = purchase_data.groupby("Age Group")
age_purchase_count = purchase_group.size()
age_price_avg = purchase_group["Price"].mean()
age_price_total = purchase_group["Price"].sum()
#Advanced calculations for "Average Total Purchase Per Person"
age_group = purchase_data.groupby("SN")
age_sum = age_group["Price"].sum()
player_with_age = pd.merge(age_sum, player_names, on = "SN")
player_age_group = player_with_age.groupby("Age Group")
player_age_avg = player_age_group["Price_x"].mean()
#Create DataFrame
purchase_analysis = pd.DataFrame({
"Purchase Count": age_purchase_count,
"Average Purchase Price": age_price_avg,
"Total Purchase Value": age_price_total,
"Average Total Purchase Per Person": player_age_avg
})
#Formatting.
purchase_analysis["Average Purchase Price"] = pd.Series(["${0:.2f}".format(val) for val in purchase_analysis["Average Purchase Price"]], index = purchase_analysis.index)
purchase_analysis["Total Purchase Value"] = pd.Series(["${0:.2f}".format(val) for val in purchase_analysis["Total Purchase Value"]], index = purchase_analysis.index)
purchase_analysis["Average Total Purchase Per Person"] = pd.Series(["${0:.2f}".format(val) for val in purchase_analysis["Average Total Purchase Per Person"]], index = purchase_analysis.index)
purchase_analysis
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Reusing 'age_group' and 'age_sum' from previous cell.
#Finding data.
purchase_size = age_group.size()
purchase_avg = age_group["Price"].mean()
#Create DataFrame.
sn_purchase = pd.DataFrame({
"Purchase Count":purchase_size,
"Average Purchase Price": purchase_avg,
"Total Purchase Value": age_sum
})
#Sorting values BEFORE formatting, so the sort is not affected by string format.
sn_purchase_sort = sn_purchase.sort_values("Total Purchase Value", ascending = False)
#Formatting.
sn_purchase_sort["Average Purchase Price"] = pd.Series(["${0:.2f}".format(val) for val in sn_purchase_sort["Average Purchase Price"]], index = sn_purchase_sort.index)
sn_purchase_sort["Total Purchase Value"] = pd.Series(["${0:.2f}".format(val) for val in sn_purchase_sort["Total Purchase Value"]], index = sn_purchase_sort.index)
sn_purchase_sort.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Grouping by Item ID, finding Data.
game_group = purchase_data.groupby(["Item ID","Item Name"])
game_count = game_group.size()
game_price = game_group["Price"].first()
game_sum = game_group["Price"].sum()
#Create DataFrame.
game_purchase = pd.DataFrame({
"Purchase Count":game_count,
"Item Price": game_price,
"Total Purchase Value": game_sum
})
#Formatting
game_purchase["Total Purchase Value"] = pd.Series(["${0:.2f}".format(val) for val in game_purchase["Total Purchase Value"]], index = game_purchase.index)
game_purchase["Item Price"] = pd.Series(["${0:.2f}".format(val) for val in game_purchase["Item Price"]], index = game_purchase.index)
#Sorting.
game_purchase_sort = game_purchase.sort_values("Purchase Count", ascending = False)
game_purchase_sort.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Formatting strings to floats for sorting purposes.
game_purchase["Total Purchase Value"] = game_purchase["Total Purchase Value"].str.replace("$", "").astype(float)
#Sorting.
game_purchase_sort_2 = game_purchase.sort_values("Total Purchase Value", ascending = False)
#Formatting
game_purchase_sort_2["Total Purchase Value"] = pd.Series(["${0:.2f}".format(val) for val in game_purchase_sort_2["Total Purchase Value"]], index = game_purchase_sort_2.index)
game_purchase_sort_2.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data_df = pd.read_csv(file_to_load)
purchase_data_df
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
total_players_unique = purchase_data_df["SN"].nunique()
total_players_df = pd.DataFrame([{"Total players": total_players_unique}])
total_players_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
items_unique = purchase_data_df["Item Name"].nunique()
average_price = purchase_data_df["Price"].mean()
number_of_purchases = purchase_data_df["Purchase ID"].count()
Total_Revenue = purchase_data_df["Price"].sum()
Summary_df = pd.DataFrame({"Number of Unique items": [items_unique],
"Average Price": [average_price],
"Number of Purchases": [number_of_purchases],
"Total Revenue": Total_Revenue})
Summary_df["Average Price"] = Summary_df["Average Price"].map('${:,.2f}'.format)
Summary_df["Total Revenue"] = Summary_df["Total Revenue"].map('${:,.2f}'.format)
Summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
grouped_df_unique = purchase_data_df.groupby(["Gender"]).nunique()
# Keep the per-gender unique-player counts as a Series; calling .unique() here would drop the
# Gender index (and collapse any duplicate counts), breaking the alignment used below.
count = grouped_df_unique["SN"]
percentage = (count/total_players_unique)*100
Gender_df = pd.DataFrame({"Percentage of Players": percentage,
                          "Count": count})
Gender_df = Gender_df.sort_values("Count", ascending=False)
Gender_df["Percentage of Players"] = Gender_df["Percentage of Players"].map('{:.2f}%'.format)
Gender_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
grouped_df = purchase_data_df.groupby(["Gender"])
purchase_count = grouped_df["Purchase ID"].count()
average_pp = grouped_df["Price"].mean()
total_purchase = grouped_df["Price"].sum()
avg_purchase_per_person = total_purchase/count
Purchase_analysis_df = pd.DataFrame({"Purchase Count": purchase_count,
"Average Purchase Price": average_pp,
"Total Purchase Value": total_purchase,
"Avg Total Purchase per Person":avg_purchase_per_person})
Purchase_analysis_df["Average Purchase Price"] = Purchase_analysis_df["Average Purchase Price"].map('${:,.2f}'.format)
Purchase_analysis_df["Total Purchase Value"] = Purchase_analysis_df["Total Purchase Value"].map('${:,.2f}'.format)
Purchase_analysis_df["Avg Total Purchase per Person"] = Purchase_analysis_df["Avg Total Purchase per Person"].map('${:,.2f}'.format)
Purchase_analysis_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
age_bins = [0,9,14,19,24,29,34,39,70]
age_labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
unique_df = purchase_data_df.drop_duplicates("SN")
unique_df["Category"] = pd.cut(unique_df["Age"],age_bins, labels=age_labels)
age_total_count = unique_df["Category"].value_counts()
age_percentage = age_total_count/total_players_unique * 100
# Creating DataFrame
age_demographics_df = pd.DataFrame({"Total Count": age_total_count,
"Percentage of Players": age_percentage})
age_demographics_df = age_demographics_df.sort_index()
# Formatting
age_demographics_df["Percentage of Players"] = age_demographics_df["Percentage of Players"].map('{:.2f}%'.format)
age_demographics_df
###Output
/Users/farihasiddiqui/opt/anaconda3/envs/PythonData/lib/python3.6/site-packages/ipykernel_launcher.py:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_data_df1 = purchase_data_df.copy()
age_bins = [0,9,14,19,24,29,34,39,70]
age_labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
purchase_data_df1["Age Ranges"] = pd.cut(purchase_data_df1["Age"],age_bins, labels=age_labels)
grouped_age_df = purchase_data_df1.groupby("Age Ranges")
purchase_count = grouped_age_df["Purchase ID"].count()
average_pp = grouped_age_df["Price"].mean()
total_purchase = grouped_age_df["Price"].sum()
avg_purchase_per_person = total_purchase/age_total_count
purchase_age_df = pd.DataFrame({"Purchase Count": purchase_count,
"Average Purchase Price": average_pp,
"Total Purchase Value": total_purchase,
"Avg Total Purchase per Person":avg_purchase_per_person})
purchase_age_df["Average Purchase Price"] = purchase_age_df["Average Purchase Price"].map('${:,.2f}'.format)
purchase_age_df["Total Purchase Value"] = purchase_age_df["Total Purchase Value"].map('${:,.2f}'.format)
purchase_age_df["Avg Total Purchase per Person"] = purchase_age_df["Avg Total Purchase per Person"].map('${:,.2f}'.format)
purchase_age_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
grouped_SN_df = purchase_data_df.groupby("SN")
p_count = grouped_SN_df["Purchase ID"].count()
avg_price = grouped_SN_df["Price"].mean()
total_value = grouped_SN_df["Price"].sum()
top_spenders_df = pd.DataFrame({"Purchase Count": p_count,
"Average Purchase Price": avg_price,
"Total Purchase Value": total_value})
top_spenders_df = top_spenders_df.sort_values("Total Purchase Value",ascending=False).head()
top_spenders_df["Average Purchase Price"] = top_spenders_df["Average Purchase Price"].map('${:,.2f}'.format)
top_spenders_df["Total Purchase Value"] = top_spenders_df["Total Purchase Value"].map('${:,.2f}'.format)
top_spenders_df
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
most_popular_df = purchase_data_df.groupby(["Item ID","Item Name"])
most_count = most_popular_df["Purchase ID"].count()
item_price = most_popular_df["Price"].mean()
most_total = most_popular_df["Price"].sum()
most_popular_df = pd.DataFrame({"Purchase Count": most_count,
"Item Price": item_price,
"Total Purchase Value": most_total})
most_popular_df = most_popular_df.sort_values("Purchase Count", ascending=False)
formatted_df = most_popular_df.copy()
formatted_df["Item Price"] = formatted_df["Item Price"].map('${:,.2f}'.format)
formatted_df["Total Purchase Value"] = formatted_df["Total Purchase Value"].map('${:,.2f}'.format)
formatted_df.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
sorted_df = most_popular_df.sort_values("Total Purchase Value", ascending=False)
sorted_df["Item Price"] = sorted_df["Item Price"].map('${:,.2f}'.format)
sorted_df["Total Purchase Value"] = sorted_df["Total Purchase Value"].map('${:,.2f}'.format)
sorted_df.head()
###Output
_____no_output_____
###Markdown
Player CountDisplay Total Number of Players
###Code
Player_count = purchase_data["SN"].nunique()
Player_count
###Output
_____no_output_____
###Markdown
Purchasing Analysis(Total)
###Code
unique_items = purchase_data["Item ID"].nunique()
unique_items
average_price = purchase_data["Price"].mean()
average_price
Total_purchase_number = purchase_data["Price"].count()
Total_purchase_number
Total_revenue = average_price*Total_purchase_number
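# Equivalent to purchase_data["Price"].sum() up to floating-point rounding; the direct sum is the more usual way to get total revenue.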
Total_revenue
data = {'Number of Unique Items':[unique_items],'Average Purchase Price':[average_price],'Total Number of Purchases':[Total_purchase_number],'Total Revenue':[Total_revenue]}
purchasing_analysis_df = pd.DataFrame(data)
purchasing_analysis_df['Average Purchase Price'] = purchasing_analysis_df['Average Purchase Price'].map("${:.2f}".format)
purchasing_analysis_df['Total Revenue'] = purchasing_analysis_df['Total Revenue'].map("${:.2f}".format)
purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics
###Code
male_df = purchase_data.loc[purchase_data["Gender"] == "Male",:]
male_df.head()
male_count = male_df["SN"].nunique()
male_count
female_df = purchase_data.loc[purchase_data["Gender"] == "Female",:]
female_df.head()
female_count = female_df["SN"].nunique()
female_count
other_df = purchase_data.loc[purchase_data["Gender"] == "Other / Non-Disclosed",:]
other_df.head()
other_count = other_df["SN"].nunique()
other_count
male_percent = (male_count/(male_count+female_count+other_count))*100
female_percent = (female_count/(male_count+female_count+other_count))*100
other_percent = (other_count/(male_count+female_count+other_count))*100
gender_data = {"":["Male","Female","Other / Non-Disclosed"],"Total Count":[male_count,female_count,other_count],"Percentage of Players":[male_percent,female_percent,other_percent]}
gender_df = pd.DataFrame(gender_data)
gender_df
gender_df = gender_df.set_index("")
gender_df["Percentage of Players"] = gender_df["Percentage of Players"].astype(float).map("{:.2f}%".format)
gender_df
###Output
_____no_output_____
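###Markdown
A more compact sketch (assuming the same `purchase_data` frame used above): dropping duplicate screen names once and calling `value_counts(normalize=True)` gives the same counts and percentages as the three separate `.loc` filters.
###Code
# Sketch only: per-gender unique-player counts and percentages (names here are hypothetical).
unique_gender = purchase_data.drop_duplicates("SN")["Gender"]
gender_summary_alt = pd.DataFrame(
    {
        "Total Count": unique_gender.value_counts(),
        "Percentage of Players": unique_gender.value_counts(normalize=True) * 100,
    }
)
gender_summary_alt
###Output
_____no_output_____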
###Markdown
Purchasing Analysis(Gender)
###Code
gender_group = purchase_data.groupby(['Gender'])
gender_count = gender_group["Purchase ID"].count()
average_purchase_price = gender_group['Price'].mean()
Total_purchase_value = gender_group['Price'].sum()
average_purchase_price_per_person_male = Total_purchase_value['Male']/male_count
average_purchase_price_per_person_female = Total_purchase_value['Female']/female_count
average_purchase_price_per_person_other = Total_purchase_value['Other / Non-Disclosed']/other_count
purchase_analysis_gender_df = pd.DataFrame({
"Purchase Count": gender_count,"Average Purchase Price": average_purchase_price,"Total Purchase Value": Total_purchase_value,"Avg Total Purchase per Person": [average_purchase_price_per_person_female,average_purchase_price_per_person_male,average_purchase_price_per_person_other]
})
purchase_analysis_gender_df["Average Purchase Price"] = purchase_analysis_gender_df["Average Purchase Price"].astype(float).map("${:.2f}".format)
purchase_analysis_gender_df["Total Purchase Value"] = purchase_analysis_gender_df["Total Purchase Value"].astype(float).map("${:,.2f}".format)
purchase_analysis_gender_df["Avg Total Purchase per Person"] = purchase_analysis_gender_df["Avg Total Purchase per Person"].astype(float).map("${:.2f}".format)
purchase_analysis_gender_df
###Output
_____no_output_____
###Markdown
Age Demographics
###Code
purchase_data.head()
purchase_unique = purchase_data.drop_duplicates('SN',keep='first')
purchase_unique
bins = [0,9,14,19,24,29,34,39,47]
group_labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
purchase_unique["Age group"] = pd.cut(purchase_unique["Age"],bins,labels = group_labels)
purchase_unique.head()
age_group = purchase_unique.groupby("Age group")["Age"].count().reset_index()
age_group = age_group.rename(columns = {"Age":"Total count"})
age_group["Percentage of Players"] = 100*age_group["Total count"]/age_group["Total count"].sum()
age_group["Percentage of Players"] = age_group["Percentage of Players"].astype(float).map("{:.2f}%".format)
age_group.set_index("Age group")
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age)
###Code
purchase_data.head()
bins = [0,9,14,19,24,29,34,39,47]
group_labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
purchase_data["Age group"] = pd.cut(purchase_data["Age"],bins,labels = group_labels)
purchase_data.head()
pdc = purchase_data.groupby("Age group")["SN"].count()
pdc = pdc.reset_index()
pdc = pdc.rename(columns = {"SN":"Purchase Count"})
pdc
app = purchase_data.groupby("Age group")["Price"].mean()
app = app.reset_index()
# app["Price"] = app["Price"].map("${:.2f}".format)
app = app.rename(columns={"Price":"Average Purchase Price"})
app
tpv = purchase_data.groupby("Age group")["Price"].sum()
tpv = tpv.reset_index()
# tpv["Price"] = tpv["Price"].map("${:.2f}".format).astype(float)
tpv = tpv.rename(columns = {"Price":"Total Purchase Value"})
tpv
tpv["Avg Total Purchase per Person"] = tpv["Total Purchase Value"]/age_group["Total count"]
tpv
combine = pdc.merge(app,on='Age group')
combine = combine.merge(tpv,on='Age group')
combine["Average Purchase Price"] = combine["Average Purchase Price"].map("${:.2f}".format)
combine["Total Purchase Value"] = combine["Total Purchase Value"].map("${:.2f}".format)
combine["Avg Total Purchase per Person"] = combine["Avg Total Purchase per Person"].map("${:.2f}".format)
combine = combine.rename(columns={'Age group':'Age Ranges'})
combine.set_index("Age Ranges")
###Output
_____no_output_____
###Markdown
Top Spenders
###Code
purchase_data['Total Purchase Value'] = purchase_data.groupby(['SN'])['Price'].transform('sum')
purchase_data['Purchase Count'] = purchase_data.groupby(['SN'])['Price'].transform('count')
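# The transform('sum') / transform('count') calls above broadcast each player's aggregate back onto
# every one of that player's rows, so these columns repeat the per-player totals on each purchase row.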
purchase_data['Average Purchase Price'] = purchase_data['Total Purchase Value']/purchase_data['Purchase Count']
purchase_data
purchase_unique = purchase_data.drop_duplicates(subset=['SN'])
purchase_unique
top_5 = purchase_unique.nlargest(5,'Total Purchase Value')
top_5
top_spenders = top_5[['SN','Purchase Count','Average Purchase Price','Total Purchase Value']]
top_spenders['Average Purchase Price'] = top_spenders['Average Purchase Price'].map("${:.2f}".format)
top_spenders['Total Purchase Value'] = top_spenders['Total Purchase Value'].map("${:.2f}".format)
top_spenders.set_index('SN')
top_spenders = top_5.loc[:,['SN','Purchase Count','Average Purchase Price','Total Purchase Value']]
top_spenders['Average Purchase Price'] = top_spenders['Average Purchase Price'].map("${:.2f}".format)
top_spenders['Total Purchase Value'] = top_spenders['Total Purchase Value'].map("${:.2f}".format)
top_spenders.set_index('SN')
###Output
_____no_output_____
###Markdown
Most Popular Items
###Code
purchase_data['Purchase Count'] = purchase_data.groupby(['Item ID'])['Price'].transform('count')
purchase_data
purchase_data['Total Purchase Value'] = purchase_data.groupby(['Item ID'])['Price'].transform('sum')
purchase_data
Item_count = purchase_data.drop_duplicates(subset=['Item ID'])
Item_count
sort_item = Item_count.sort_values('Purchase Count',ascending=False)
sort_item.head(6)
popular = Item_count.nlargest(5,'Purchase Count')
popular['Price'] = popular['Price'].map("${:.2f}".format)
popular['Total Purchase Value'] = popular['Total Purchase Value'].map("${:.2f}".format)
popular = popular.rename(columns={'Price':'Item Price'})
popular
Most_Popular_Items = popular[['Item ID','Item Name','Purchase Count','Item Price','Total Purchase Value']]
Most_Popular_Items.set_index(['Item ID','Item Name'])
###Output
_____no_output_____
###Markdown
Most Profitable Items
###Code
most_profitable = Item_count.nlargest(5,'Total Purchase Value')
most_profitable
Most_Profitable_Items = most_profitable[['Item ID','Item Name','Purchase Count','Price','Total Purchase Value']]
Most_Profitable_Items = Most_Profitable_Items.rename(columns={'Price':'Item Price'})
Most_Profitable_Items['Item Price'] = Most_Profitable_Items['Item Price'].map("${:.2f}".format)
Most_Profitable_Items['Total Purchase Value'] = Most_Profitable_Items['Total Purchase Value'].map("${:.2f}".format)
Most_Profitable_Items.set_index('Item ID')
###Output
_____no_output_____
###Markdown
Observable Trends
1. The majority of gamers are male, comprising more than 84% (484 out of 576 total unique players).
   There are 81 female players (about 14%), and the rest are other/undisclosed.
2. However, the average purchase is higher for both female and other/undisclosed players than for male players.
   The average female purchase value is $4.47 and the other/undisclosed average purchase value is $4.56, versus $4.07 for males.
   One possible explanation is that female/other players tend to show higher engagement than male players.
3. The majority of players are in the 15-29 age range, totaling 442 out of 576 (more than 75%).
   One reason could be lower work/family commitments for this age cohort, leaving more free time to engage in video games.
   Total purchase value is highest for the 20-24 age range.
   Average purchase price is highest for the 35-39 age group, indicating higher discretionary income.
###Markdown
Player Count* Total Number of Players
###Code
player_count = len(purchase_data_df["SN"].unique())
player_count
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total)* Number of Unique Items* Average Purchase Price* Total Number of Purchases* Total Revenue
###Code
#Number of unique items.
unique_item = len(purchase_data_df["Item Name"].unique())
unique_item
#Average Purchase Price
average_purchase = purchase_data_df["Price"].mean()
average_purchase
#Total number of purchases
total_purchases = len(purchase_data_df)
total_purchases
# Total Revenue.
total_revenue= purchase_data_df["Price"].sum()
total_revenue.round(2)
#Make a data frame out of a dictionary of the new values
purchasing_analysis = pd.DataFrame({"Number of Unique Items":[unique_item],
"Average Purchase Price":[average_purchase],
"Total Number of Purchases":[total_purchases],
"Total Revenue":[total_revenue]})
purchasing_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics* Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Count of Male Players, Female Players and Other/Non-Disclosed
gender_count = purchase_data_df.groupby("Gender")["SN"].nunique()
gender_count.head()
#Percentage of Male Players,Female Players and Other/Non-Disclosed
gender_percent = (gender_count/player_count)*100
gender_percent.round(2)
##Make a data frame out of a dictionary of the new values
gender_demographics = pd.DataFrame({"Gender Count": gender_count,
"Gender Percentage":gender_percent})
gender_demographics
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender)* The below each broken by gender * Purchase Count * Average Purchase Price * Total Purchase Value * Average Purchase Total per Person by Gender
###Code
#Purchase Count by gender
gender_purchases = purchase_data_df.groupby("Gender")["Item Name"]
gender_p_count = gender_purchases.count()
gender_p_count
#Average Purchase Price by gender
gender_average_p = purchase_data_df.groupby("Gender")["Price"].mean()
gender_average_p.round(2)
#Total Purchase Value by gender
gender_total_p = purchase_data_df.groupby("Gender")["Price"].sum()
gender_total_p
#Average Purchase Total Per Person by gender
aptpp= gender_total_p/gender_count
aptpp.round(2)
# Purchasing analysis DataFrame by gender.
gender_analysis = pd.DataFrame({"Purchase Count":gender_p_count,
"Average Purchase Price":gender_average_p,
"Total Purchase Value":gender_total_p,
"Average Purchase Total Per Person":aptpp})
gender_analysis
###Output
_____no_output_____
###Markdown
Age Demographics* The below each broken into bins of 4 years (i.e. <10, 10-14, 15-19, etc.) * Purchase Count * Average Purchase Price * Total Purchase Value * Average Purchase Total per Person by Age Group
###Code
#Create bins: <10, 10-14, 15-19, 20-24, 25-29, 30-34, 35-39 >39.
bins = [0,10,15,20,25,30,35,40, 45]
age_ranges = ["<10", "10-14","15-19", "20-24", "25-29", "30-34", "35-39", ">=40"]
# Cut purchase data and place the ages into bins
purchase_data_df["Age Range"] = pd.cut(purchase_data_df["Age"], bins, labels=age_ranges, include_lowest=True)
purchase_data_df
# Purchase count by age range.
age_purchase_count = purchase_data_df.groupby("Age Range")["Item Name"]
age_p_count = age_purchase_count.count()
age_p_count
# Average purchase price by age range.
age_purchase_price= purchase_data_df.groupby("Age Range")["Price"].mean()
age_purchase_price.round(2)
#Total purchase value by age range.
age_total_purchase = purchase_data_df.groupby("Age Range")["Price"].sum()
age_total_purchase
#Average Purchase Total Per Person by Age Group
#(divide each age group's total by the unique players in that group, not by the overall player count)
age_group_players = purchase_data_df.groupby("Age Range")["SN"].nunique()
age_aptpp = age_total_purchase/age_group_players
age_aptpp.round(2)
# Age Demographics DataFrame
age_demographics = pd.DataFrame({"Purchase Count":age_p_count,
"Average Purchase Price":age_purchase_price,
"Total Purchase Value": age_total_purchase,
"Average Purchase Total Per Person": age_aptpp})
age_demographics
###Output
_____no_output_____
###Markdown
Top Spenders* Identify the the top 5 spenders in the game by total purchase value, then list (in a table): * SN * Purchase Count * Average Purchase Price * Total Purchase Value
###Code
players_purchase_count = purchase_data_df.groupby("SN").count()["Price"].rename("Purchase Count")
players_average_price = purchase_data_df.groupby("SN").mean()["Price"].rename("Average Purchase Price")
players_total = purchase_data_df.groupby("SN").sum()["Price"].rename("Total Purchase Value")
#Convert to DataFrame.
total_users = pd.DataFrame({"Purchase Count":players_purchase_count,
"Average Purchase Price": players_average_price,
"Total Purchase Value": players_total})
total_users.head()
# Sort table to show the top five spenders.
top_five = total_users.sort_values("Total Purchase Value", ascending=False)
top_five.head(5)
###Output
_____no_output_____
###Markdown
Most Popular Items* Identify the 5 most popular items by purchase count, then list (in a table): * Item ID * Item Name * Purchase Count * Item Price * Total Purchase Value
###Code
# Total items purchases analysis.
items_purchase_count = purchase_data_df.groupby(["Item ID", "Item Name"]).count()["Price"].rename("Purchase Count")
items_average_price = purchase_data_df.groupby(["Item ID", "Item Name"]).mean()["Price"].rename("Average Purchase Price")
items_value_total = purchase_data_df.groupby(["Item ID", "Item Name"]).sum()["Price"].rename("Total Purchase Value")
# Convert to DataFrame
items_purchased = pd.DataFrame({"Purchase Count":items_purchase_count,
"Item Price":items_average_price,
"Total Purchase Value":items_value_total})
items_purchased.head()
# Sort table to show the five the most popular items.
popular_items = items_purchased.sort_values("Purchase Count", ascending=False)
popular_items.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items* Identify the 5 most profitable items by total purchase value, then list (in a table): * Item ID * Item Name * Purchase Count * Item Price * Total Purchase Value
###Code
# Sort table to show the five the most profitable items.
profitable_items = items_purchased.sort_values("Total Purchase Value", ascending=False)
profitable_items.head()
###Output
_____no_output_____
###Markdown
Player Count Display the total number of players
###Code
unique_players = len(purchase_data['SN'].unique())
player_demo = purchase_data.loc[:, ["Gender", "SN", "Age"]]
player_demo = player_demo.drop_duplicates()
unique_players_df = pd.DataFrame({"Total Players" : [unique_players]})
unique_players_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total)Run basic calculations to obtain number of unique items, average price, etc.Create a summary data frame to hold the resultsOptional: give the displayed data cleaner formattingDisplay the summary data frame
###Code
purchase_data.head()
unique_items = len(purchase_data["Item Name"].unique())
average_price = round(purchase_data["Price"].mean(),2)
number_of_purchases = purchase_data["Price"].count()
total_revenue = purchase_data["Price"].sum()
# Purchase summary DataFrame
purchase_df = pd.DataFrame({"Number of Unique Items" : [unique_items],
"Average Price" : [average_price],
"Number of Purchases" : [number_of_purchases],
"Total Revenue" : [total_revenue]
})
purchase_df["Average Price"] = purchase_df["Average Price"].map("${:,.2f}".format)
purchase_df["Total Revenue"] = purchase_df["Total Revenue"].map("${:,.2f}".format)
purchase_df
###Output
_____no_output_____
###Markdown
Gender DemographicsPercentage and Count of Male PlayersPercentage and Count of Female PlayersPercentage and Count of Other / Non-Disclosed
###Code
gender_demo_total = player_demo["Gender"].value_counts()
gender_demo_percentage = gender_demo_total/unique_players * 100
gender_demo_df = pd.DataFrame({"Total Count": gender_demo_total,
"Percentage of Players":gender_demo_percentage
})
gender_demo_df["Percentage of Players"] = gender_demo_df["Percentage of Players"].map("{:,.2f}%".format)
gender_demo_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender)¶Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by genderCreate a summary data frame to hold the resultsOptional: give the displayed data cleaner formattingDisplay the summary data frame
###Code
avg_purchase_price = purchase_data.groupby(["Gender"]).mean()["Price"]
purchase_total = purchase_data.groupby(["Gender"]).sum()["Price"]
purchase_count = purchase_data.groupby(["Gender"]).count()["Purchase ID"]
# Avg Total Purchase per Person
avg_total_purchase_per_person = purchase_total / gender_demo_df["Total Count"]
# avg_total_purchase_per_person
summary_gender_data = pd.DataFrame({"Purchase Count" : purchase_count,
"Average Purchase Price" : avg_purchase_price,
"Total Purchase Value" : purchase_total,
"Avg Total Purchase per Person": avg_total_purchase_per_person
})
summary_gender_data["Average Purchase Price"] = summary_gender_data["Average Purchase Price"].map("${:,.2f}".format)
summary_gender_data["Total Purchase Value"] = summary_gender_data["Total Purchase Value"].map("${:,.2f}".format)
summary_gender_data["Avg Total Purchase per Person"] = summary_gender_data["Avg Total Purchase per Person"].map("${:,.2f}".format)
summary_gender_data
###Output
_____no_output_____
###Markdown
Age DemographicsEstablish bins for agesCategorize the existing players using the age bins. Hint: use pd.cut()Calculate the numbers and percentages by age groupCreate a summary data frame to hold the resultsOptional: round the percentage column to two decimal pointsDisplay Age Demographics Table
###Code
# Establish bins for ages
bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 999]
# Create the names for the five bins
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Categorize the existing players using the age bins.
player_demo["Age Range"] = pd.cut(player_demo["Age"], bins, labels=group_names)
player_demo
# Calculate the numbers and percentages by age group
age_demo_total = player_demo["Age Range"].value_counts()
age_demo_total
age_demo_percent = age_demo_total / unique_players * 100
age_demo_percent
# Create a summary data frame to hold the results
age_demo = pd.DataFrame({"Total Count": age_demo_total,
"Percentage of Players": age_demo_percent
})
# round the percentage column to two decimal points
age_demo["Percentage of Players"] = age_demo["Percentage of Players"].map("{:,.2f}%".format)
age_demo = age_demo.sort_index()
age_demo
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age)¶Bin the purchase_data frame by ageRun basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table belowCreate a summary data frame to hold the resultsOptional: give the displayed data cleaner formattingDisplay the summary data frame
###Code
# Bin the purchase_data frame by age
purchase_data["Age Range"] = pd.cut(purchase_data["Age"], bins, labels=group_names)
# Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
avg_purchase_price_by_age = purchase_data.groupby(["Age Range"]).mean()["Price"]
purchase_total_by_age = purchase_data.groupby(["Age Range"]).sum()["Price"]
purchase_count_by_age = purchase_data.groupby(["Age Range"]).count()["Purchase ID"]
# Avg Total Purchase per Person by age
avg_total_purchase_per_person_by_age = purchase_total_by_age / age_demo["Total Count"]
avg_total_purchase_per_person_by_age
# Create a summary data frame to hold the results
summary_age_data = pd.DataFrame({"Purchase Count" : purchase_count_by_age,
"Average Purchase Price" : avg_purchase_price_by_age,
"Total Purchase Value" : purchase_total_by_age,
"Avg Total Purchase per Person": avg_total_purchase_per_person_by_age
})
#clean formatting
summary_age_data["Average Purchase Price"] = summary_age_data["Average Purchase Price"].map("${:,.2f}".format)
summary_age_data["Total Purchase Value"] = summary_age_data["Total Purchase Value"].map("${:,.2f}".format)
summary_age_data["Avg Total Purchase per Person"] = summary_age_data["Avg Total Purchase per Person"].map("${:,.2f}".format)
summary_age_data
###Output
_____no_output_____
###Markdown
Top Spenders¶Run basic calculations to obtain the results in the table belowCreate a summary data frame to hold the resultsSort the total purchase value column in descending orderOptional: give the displayed data cleaner formattingDisplay a preview of the summary data frame
###Code
# Top Spenders¶
# Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total
avg_purchase_price_by_name = purchase_data.groupby(["SN"]).mean()["Price"]
purchase_total_by_name = purchase_data.groupby(["SN"]).sum()["Price"]
purchase_count_by_name = purchase_data.groupby(["SN"]).count()["Purchase ID"]
# Create a summary data frame to hold the results
spender_data_summary = pd.DataFrame({"Purchase Count" : purchase_count_by_name,
"Average Purchase Price" : avg_purchase_price_by_name,
"Total Purchase Value" : purchase_total_by_name
})
# Sort the total purchase value column in descending order
spender_data_summary_sorted = spender_data_summary.sort_values("Total Purchase Value", ascending = False)
spender_data_summary_sorted
# Clean formatting
spender_data_summary_sorted["Average Purchase Price"] = spender_data_summary_sorted["Average Purchase Price"].map("${:,.2f}".format)
spender_data_summary_sorted["Total Purchase Value"] = spender_data_summary_sorted["Total Purchase Value"].map("${:,.2f}".format)
# Display a preview of the summary data frame - top 5 rows
spender_data_summary_sorted.head()
###Output
_____no_output_____
###Markdown
Most Popular Items¶Retrieve the Item ID, Item Name, and Item Price columnsGroup by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase valueCreate a summary data frame to hold the resultsSort the purchase count column in descending orderOptional: give the displayed data cleaner formattingDisplay a preview of the summary data frame
###Code
# Most Popular Items
# Retrieve the Item ID, Item Name, and Item Price columns
purchase_data[["Item ID", "Item Name", "Price"]]
# Group by Item ID and Item Name.
purchase_data_grouped = purchase_data.groupby(["Item ID", "Item Name"])
# purchase_data_grouped
# Perform calculations to obtain purchase count, average item price, and total purchase value
item_purchase_count = purchase_data.groupby(["Item ID", "Item Name"]).count()["Purchase ID"]
# item_purchase_count
item_average_price = purchase_data.groupby(["Item ID", "Item Name"]).mean()["Price"]
# item_average_price
item_total_purchase_value = purchase_data.groupby(["Item ID", "Item Name"]).sum()["Price"]
# item_total_purchase_value
# Create a summary data frame to hold the results
popular_items_summary = pd.DataFrame({"Purchase Count" : item_purchase_count,
"Item Price" : item_average_price,
"Total Purchase Value" : item_total_purchase_value
})
popular_items_summary
# Sort the purchase count column in descending order
popular_items_summary_sorted_by_count = popular_items_summary.sort_values("Purchase Count", ascending = False)
# Clean formatting
popular_items_summary_sorted_by_count["Item Price"] = popular_items_summary_sorted_by_count["Item Price"].map("${:,.2f}".format)
popular_items_summary_sorted_by_count["Total Purchase Value"] = popular_items_summary_sorted_by_count["Total Purchase Value"].map("${:,.2f}".format)
# Display a preview of the summary data frame
popular_items_summary_sorted_by_count.head()
###Output
_____no_output_____
###Markdown
Most Profitable ItemsSort the above table by total purchase value in descending orderOptional: give the displayed data cleaner formattingDisplay a preview of the data frame
###Code
# Most Profitable Items
# Sort the above table by total purchase value in descending order
items_sorted_by_purchase_value = popular_items_summary.sort_values("Total Purchase Value", ascending = False)
items_sorted_by_purchase_value
# cleanup formatting
items_sorted_by_purchase_value["Item Price"] = items_sorted_by_purchase_value["Item Price"].map("${:,.2f}".format)
items_sorted_by_purchase_value["Total Purchase Value"] = items_sorted_by_purchase_value["Total Purchase Value"].map("${:,.2f}".format)
# Display a preview of the data frame
items_sorted_by_purchase_value.head()
###Output
_____no_output_____
###Markdown
Player Count* Total Number of Players
###Code
# Finding the total number of players and displaying in a data frame
player_count = len(purchase_data["SN"].unique())
player_df = pd.DataFrame({"Total Players": [player_count]})
player_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Finding the number of unique items
unique_items = len(purchase_data["Item ID"].unique())
# Finding the average price of these items
avg_price = round(purchase_data["Price"].mean(),2)
# Finding the number of purchases
total_purchases = purchase_data["Purchase ID"].count()
# Finding total revenue
total_revenue = purchase_data["Price"].sum()
# Creating a summary data frame
purchase_analysis = pd.DataFrame({"Number of Unique Items": [unique_items],
"Average Price": "${:,}".format(avg_price),
"Number of Purchases": total_purchases,
"Total Revenue": "${:,}".format(total_revenue)})
# Displaying summary of data frame
purchase_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Locating and Creating a data frame for male players
male_df = purchase_data.loc[purchase_data["Gender"] == "Male",:]
# Counting unique males in male data frame
male_count = len(male_df["SN"].unique())
male_percentage = "{:.2%}".format(male_count/player_count)
# Locating and Creating a data frame for female players
female_df = purchase_data.loc[purchase_data["Gender"] == "Female",:]
# Counting unique females in female data frame
female_count = len(female_df["SN"].unique())
female_percentage = "{:.2%}".format(female_count/player_count)
# Locating and Creating a data frame for other / non-disclosed players
other_df = purchase_data.loc[purchase_data["Gender"] == "Other / Non-Disclosed",:]
# Counting unique others in other data frame
other_count = len(other_df["SN"].unique())
other_percentage = "{:.2%}".format(other_count/player_count)
# Creating summary data frame
demographics_summary = pd.DataFrame({"Total Count": {"Male": male_count,
"Female":female_count,
"Other / Non-Disclosed":other_count},
"Percentage of Players": {"Male":male_percentage,
"Female":female_percentage,
"Other / Non-Disclosed":other_percentage}})
# Written Description for the observable trend of the Data
print('Looking at the gender demographics alone, a great majority of the players that make up the game are male –– totaling 84.03%.')
# Displaying summary
demographics_summary
###Output
Looking at the gender demographics alone, a great majority of the players that make up the game are male –– totaling 84.03%.
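###Markdown
For comparison, the same percentage column can come from a single chained expression using value_counts(normalize=True) on the de-duplicated players. A minimal sketch, assuming purchase_data is the purchases DataFrame used above:
###Code
# Sketch: percentage of unique players per gender straight from value_counts
gender_share = (purchase_data.drop_duplicates(subset="SN")["Gender"]
                .value_counts(normalize=True)
                .mul(100)
                .round(2))
gender_share
###Output
_____no_output_____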
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Grouping by gender
grouped_purchases_gender = purchase_data.groupby("Gender")
# Counting purchases
purchase_count = grouped_purchases_gender["Purchase ID"].count()
# Averaging purchase price
avg_purchases_price = round(grouped_purchases_gender["Price"].mean(),2)
# Total Purchase Value
total_purchase_value = grouped_purchases_gender["Price"].sum()
# Average Total Purchase Per Person
avg_per_person = total_purchase_value/demographics_summary["Total Count"]
# Creating a summary data frame
purchasing_analysis_summary = pd.DataFrame({"Purchase Count": purchase_count,
"Average Purchase Price": avg_purchases_price.apply(lambda x: '${:,.2f}'.format(x)),
"Total Purchase Value": total_purchase_value.apply(lambda x: '${:,.2f}'.format(x)),
"Avg Total Purchase Per Person": avg_per_person.apply(lambda x: '${:,.2f}'.format(x))})
# Interesting observable trend
print('Although males make up a huge portion of the purchasing value, it is interesting to see that both female and other spend much more on average')
# Displaying Summary
purchasing_analysis_summary
###Output
Although males make up a huge portion of the purchasing value, it is interesting to see that both female and other spend much more on average
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Age bins and labels
age_bins = [0,9.99,14.99,19.99,24.99,29.99,34.99,39,200]
age_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Adding a new column that includes the bins and labels to the current data set
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], age_bins, labels = age_labels)
# Deleting and duplicate gaming accounts from the data set
unduped_purchase_data = purchase_data.drop_duplicates('SN')
# Aggregating total players in each age range category from the unduped data set
total_players = unduped_purchase_data.groupby(unduped_purchase_data["Age Ranges"]).count()['SN']
# Calculating the percentage of players that make up each category
percentage_of_players_dem = total_players/player_count
# Creating a summary data frame
purchasing_analysis_age_dem = pd.DataFrame({"Total Count": total_players,
"Percentage of Players": percentage_of_players_dem.apply(lambda x: "{:.2%}".format(x))})
# Written description for the observable trend of the data
print('The majority of players are between the ages 15-29 –– totaling over 76%.')
purchasing_analysis_age_dem
###Output
The majority of players are between the ages 15-29 –– totaling over 76%.
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Reusing code from earlier to again create age range bins
age_bins = [0,9.99,14.99,19.99,24.99,29.99,34.99,39,200]
age_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], age_bins, labels = age_labels)
# Counting the number of purchases made amongst each age range
purchase_count_age = purchase_data.groupby([purchase_data["Age Ranges"]]).count()['Purchase ID']
# Calculating the average purchase price amongst each age range
avg_purchase_price_age = purchase_data.groupby(purchase_data["Age Ranges"]).mean()['Price']
# Calculating the total sum of purchases for each ge range
total_purchase_value = purchase_data.groupby(purchase_data["Age Ranges"]).sum()['Price']
# Calculating the average purchase per person amongst each age range
avg_total_purchase_per_person = total_purchase_value/total_players
# Creating a summary data frame
purchasing_analysis_age_summary = pd.DataFrame({"Purchase Count": purchase_count_age,
"Average Purchase Price": avg_purchase_price_age.apply(lambda x: '${:,.2f}'.format(x)),
"Total Purchase Value": total_purchase_value.apply(lambda x: '${:,.2f}'.format(x)),
"Avg Total Purchase Per Person": avg_total_purchase_per_person.apply(lambda x: '${:,.2f}'.format(x))})
# Displaying summary of data frame
purchasing_analysis_age_summary
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Counting the number of purchases for each player
purchase_count_spender = purchase_data.groupby('SN').count()['Purchase ID']
# Calculating the average purchase for specific players
avg_purchase_price_spender = purchase_data.groupby('SN').mean()['Price']
# Calculating total purchases for specific players
total_purchase_value_spender = purchase_data.groupby('SN').sum()['Price']
# Creating a summary data frame
purchasing_analysis_spending_summary = pd.DataFrame({"Purchase Count": purchase_count_spender,
"Average Purchase Price": avg_purchase_price_spender.apply(lambda x: '${:,.2f}'.format(x)),
"Total Purchase Value": total_purchase_value_spender})
# Sorting the data frame by the Total Purchase Value column in descending order
top_spenders = purchasing_analysis_spending_summary.sort_values('Total Purchase Value', ascending = False)
# Displaying the summary with the top 5 spenders
top_spenders.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Grouping by Item ID and Item Name and counting the number of purchases
item_group_count = purchase_data.groupby(['Item ID', 'Item Name']).count()['Purchase ID']
# Calculating the total amount spent on an item
total_item_price = purchase_data.groupby(['Item ID', 'Item Name']).sum()['Price']
# Calculating the price of each item
item_price = total_item_price/item_group_count
# Creating a summary data frame
item_popularity_summary = pd.DataFrame({"Purchase Count": item_group_count,
"Item Price": item_price.apply(lambda x: '${:,.2f}'.format(x)),
"Total Purchase Value": total_item_price})
# Sorting the summary data frame by Purchase Count in descending orer
item_popularity = item_popularity_summary.sort_values('Purchase Count', ascending = False)
# Displaying the top 5 most popular items
item_popularity.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Sorting the item popularity summary data frame by Total Purchase Value in descending orer
item_popularity_value = item_popularity_summary.sort_values('Total Purchase Value', ascending = False)
# Observable trend
print('Oathbreaker, Fiery Glass Crusader, and Nirvana are by far the most popular and profitable items. Future items should use those three as an example.')
# Displaying the top 5 most profitable items
item_popularity_value.head()
###Output
Oathbreaker, Fiery Glass Crusader, and Nirvana are by far the most popular and profitable items. Future items should use those three as an example.
###Markdown
Player Count * Display the total number of players
###Code
total_players = purchase_data["SN"].nunique()
player_count = pd.DataFrame({"Total Players": [total_players]})
player_count
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Display the summary data frame
###Code
unique_items = purchase_data["Item ID"].nunique()
average_price = purchase_data["Price"].mean()
total_purchases = purchase_data["Purchase ID"].nunique()
total_revenue = purchase_data["Price"].sum()
purchasing_analysis = pd.DataFrame({"Number of Unique Items": [unique_items],
"Average Price": [average_price],
"Number of Purchases": [total_purchases],
"Total Revenue": [total_revenue]})
purchasing_analysis.style.format({"Average Price":"${:,.2f}",
"Total Revenue":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
gender_groups = purchase_data.groupby("Gender")
gender_count = gender_groups.nunique()["SN"]
gender_percentage = gender_count/total_players * 100
gender_demographics = pd.DataFrame({"Total Count": gender_count, "Percentage of Players": gender_percentage})
gender_demographics.index.name= None
gender_demographics.sort_values(["Total Count"], ascending=False).style.format({"Percentage of Players": "{:.2f}%"})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Display the summary data frame
###Code
purchase_count = gender_groups["Purchase ID"].count()
gender_average_price = gender_groups["Price"].mean()
gender_total_value = gender_groups["Price"].sum()
gender_average_total = gender_total_value/gender_count
purchasing_analysis_gender = pd.DataFrame({"Purchase Count": purchase_count,
"Average Purchase Price": gender_average_price,
"Total Purchase Value": gender_total_value,
"Avg Total Purchase per Person": gender_average_total})
purchasing_analysis_gender.style.format({"Average Purchase Price": "${:.2f}",
"Total Purchase Value": "${:.2f}",
"Avg Total Purchase per Person": "${:.2f}"})
###Output
_____no_output_____
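###Markdown
The piecewise construction above works; a more compact alternative is a single groupby pass with named aggregation. A sketch under the assumption that purchase_data and the column names match this dataset:
###Code
# Sketch: build the same gender summary in one groupby pass with named aggregation
gender_summary = purchase_data.groupby("Gender").agg(
    **{"Purchase Count": ("Purchase ID", "count"),
       "Average Purchase Price": ("Price", "mean"),
       "Total Purchase Value": ("Price", "sum"),
       "Unique Players": ("SN", "nunique")})
gender_summary["Avg Total Purchase per Person"] = (
    gender_summary["Total Purchase Value"] / gender_summary["Unique Players"])
gender_summary
###Output
_____no_output_____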
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. * Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Display Age Demographics Table
###Code
bins = [0, 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.9, 49]
bin_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Bins"] = pd.cut(purchase_data["Age"], bins, labels= bin_names)
age_groups = purchase_data.groupby("Age Bins")
total_age_count = age_groups["SN"].nunique()
percentage_players = total_age_count/total_players*100
age_demographics = pd.DataFrame({"Total Count": total_age_count, "Percentage of Players": percentage_players})
age_demographics.index.name = None
age_demographics.style.format({"Percentage of Players": "{:.2f}%"})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Display the summary data frame
###Code
purchase_count_age = age_groups["Purchase ID"].count()
average_price_age = age_groups["Price"].mean()
total_purchase_age = age_groups["Price"].sum()
average_total_age = total_purchase_age/total_age_count
purchasing_analysis_age = pd.DataFrame({"Purchase Count": purchase_count_age,
"Average Purchase Price": average_price_age,
"Total Purchase Value": total_purchase_age,
"Avg Total Purchase Per Person": average_total_age})
purchasing_analysis_age.index.name= "Age Ranges"
purchasing_analysis_age.style.format({"Average Purchase Price": "${:.2f}",
"Total Purchase Value": "${:.2f}",
"Avg Total Purchase Per Person": "${:.2f}"})
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Display a preview of the summary data frame
###Code
spenders = purchase_data.groupby("SN")
purchase_count_spender = spenders["Purchase ID"].count()
average_price_spender = spenders["Price"].mean()
total_purchase_spender = spenders["Price"].sum()
top_spenders = pd.DataFrame({"Purchase Count":purchase_count_spender,
"Average Purchase Price": average_price_spender,
"Total Purchase Value": total_purchase_spender})
top_five = top_spenders.sort_values(["Total Purchase Value"], ascending= False).head()
top_five.style.format({"Average Purchase Price": "${:.2f}",
"Total Purchase Value": "${:.2f}"})
###Output
_____no_output_____
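###Markdown
An equivalent way to pull the preview above is DataFrame.nlargest, which skips the explicit sort and works because the dollar columns are still numeric. A minimal sketch on the top_spenders frame built above:
###Code
# Sketch: nlargest returns the same top-5 rows without sorting the whole frame first
top_spenders.nlargest(5, "Total Purchase Value")
###Output
_____no_output_____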
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Display a preview of the summary data frame
###Code
popular_items = purchase_data[["Item ID", "Item Name", "Price"]]
items_grouped = popular_items.groupby(["Item ID", "Item Name"])
popular_purchase_count = items_grouped["Price"].count()
popular_total_value = items_grouped["Price"].sum()
popular_items_price = popular_total_value/popular_purchase_count
most_popular_items = pd.DataFrame({"Purchase Count": popular_purchase_count,
"Item Price": popular_items_price,
"Total Purchase Value": popular_total_value})
most_popular_summary = most_popular_items.sort_values(["Purchase Count"], ascending= False).head()
most_popular_summary.style.format({"Item Price": "${:.2f}", "Total Purchase Value": "${:.2f}"})
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Display a preview of the data frame
###Code
most_popular_summary = most_popular_items.sort_values(["Total Purchase Value"], ascending= False).head()
most_popular_summary.style.format({"Item Price": "${:.2f}", "Total Purchase Value": "${:.2f}"})
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
data_df = pd.read_csv(purchase_data)
#Create a data frame displaying the total number of players
total_players = [{'Total Players': str(len(data_df['SN'].value_counts()))}]
data_df = pd.DataFrame(total_players, index =['0'])
data_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Create a summary data frame of unique items, average price, etc.
groupby_gender_df = pd.read_csv(purchase_data, encoding="utf-8")
items_count = purchase_data_pd["Item ID"].nunique()
gender_df = groupby_gender_df.groupby(["Gender"])
average_prices = purchase_data_pd["Price"].mean()
average_price = average_prices
total_orders = purchase_data_pd["Purchase ID"].count()
total_revenue = average_prices * total_orders
#Display the summary data frame
summary_result = pd.DataFrame({"Number of Unique Items":[items_count],
"Average Price": [average_price],
"Number of Purchases": [total_orders],
"Total Revenue": [total_revenue]})
summary_result["Total Revenue"] = summary_result["Total Revenue"].astype(float).map("${:,.2f}".format)
summary_result["Average Price"] = summary_result["Average Price"].astype(float).map("${:.2f}".format)
summary_result
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Group purchase_data by Gender
index = ["Male", "Female", "Other / Non-Disclosed"]
groupby_df = pd.DataFrame({"Total Count" : groupby_gender_df["Gender"].value_counts(),
"Percentage of Players" : (groupby_gender_df["Gender"].value_counts()/groupby_gender_df["Gender"].count())/1})
# Format with currency style
groupby_df["Percentage of Players"] = groupby_df["Percentage of Players"].astype(float).map("{:.2%}".format)
groupby_df.style
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
total_gender = gender_df.nunique()["SN"]
purchase_count = groupby_gender_df["Gender"].value_counts()
avg_price = gender_df["Price"].mean()
total_value = avg_price * purchase_count
average_purch = total_value / total_gender
purchasing_analysis_df = pd.DataFrame({"Purchase Count": purchase_count,
"Average Purchase Price": avg_price,
"Total Purchase Value": total_value,
"Avg Total Purchase per Person": average_purch})
purchasing_analysis_df.index.name = "Gender"
index = ["Male", "Female", "Other / Non-Disclosed"]
purchasing_analysis_df.style.format({"Total Purchase Value":"${:,.2f}",
"Average Purchase Price":"${:,.2f}",
"Avg Total Purchase per Person":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
data_pd = pd.read_csv(purchase_data)
# Establish bins for ages
age_bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 99999]
age_ranger = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
data_pd["AgeG"] = pd.cut(data_pd["Age"],age_bins, labels=age_ranger)
group_age = data_pd.groupby("AgeG")
total_age = group_age["SN"].nunique()
total_player = len(data_pd["SN"].value_counts())
# Calculate percentages by age category
percent_by_age = (total_age / total_player) * 100
age_demographics = pd.DataFrame({"Total Count": total_age,
"Percentage of Players": percent_by_age})
# Format the data frame with no index name in the corner
age_demographics.index.name = None
# Format percentage with two decimal places
age_demographics.style.format({"Percentage of Players":"{:,.2f}%"})
###Output
_____no_output_____
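###Markdown
A quick illustration of why the bin edges above end in .90: pd.cut builds right-closed intervals, so a whole-number age equal to an edge would otherwise drop into the lower group. A minimal sketch with a few hypothetical ages:
###Code
# Sketch: right-closed intervals mean an age equal to an edge falls into the lower bin;
# fractional edges like 9.90 and 14.90 sidestep that for integer ages
pd.cut(pd.Series([9, 10, 14, 15]),
       bins=[0, 9.90, 14.90, 19.90],
       labels=["<10", "10-14", "15-19"])
###Output
_____no_output_____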
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_age = group_age["Purchase ID"].count()
avg_pur_price = group_age["Price"].mean()
avg_purch_price = avg_pur_price.round(2)
purch_value = group_age["Price"].sum()
total_per_persons = purch_value/total_age
total_per_person = total_per_persons.round(2)
age_analysis= pd.DataFrame({"Purchase Counts": purchase_age,
"Average Purchase Price": avg_purch_price,
"Total Purchase Value": purch_value,
"Avg Total Purchase per Person": total_per_person})
age_analysis["Average Purchase Price"] = age_analysis["Average Purchase Price"].astype(float).map("${:,.2f}".format)
age_analysis["Total Purchase Value"] = age_analysis["Total Purchase Value"].astype(float).map("${:,.2f}".format)
age_analysis["Avg Total Purchase per Person"] = age_analysis["Avg Total Purchase per Person"].astype(float).map("${:,.2f}".format)
age_analysis.index.name = "Age Ranges"
age_analysis.style
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
top_spenders = pd.read_csv(purchase_data)
screen_name = top_spenders.groupby("SN")
pur_count = screen_name["Purchase ID"].nunique()
pur_price = screen_name["Price"].mean()
purchase_price = pur_price.round(2)
purchase_value = screen_name["Price"].sum()
spenders_name = pd.DataFrame({"Purchase Count" : pur_count,
"Average Purchase Price" : purchase_price,
"Total Purchase Value" : purchase_value,})
spenders_name = spenders_name.sort_values("Total Purchase Value", ascending=False).head()
spenders_name["Average Purchase Price"] = spenders_name["Average Purchase Price"].astype(float).map("${:,.2f}".format)
spenders_name["Total Purchase Value"] = spenders_name["Total Purchase Value"].astype(float).map("${:,.2f}".format)
spenders_name.index.name = "SN"
spenders_name.style
###Output
_____no_output_____
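###Markdown
One design note on the .map("${:,.2f}".format) pattern above: it replaces the dollar columns with strings, so it has to come after any numeric sorting. A sketch of the alternative that keeps the underlying values numeric and formats only the rendered view, reusing the series computed above:
###Code
# Sketch: keep the dollar columns numeric and format only for display,
# so any later sorts stay numeric rather than lexicographic on strings
spenders_numeric = pd.DataFrame({"Purchase Count": pur_count,
                                 "Average Purchase Price": pur_price,
                                 "Total Purchase Value": purchase_value})
(spenders_numeric.sort_values("Total Purchase Value", ascending=False)
                 .head()
                 .style.format({"Average Purchase Price": "${:,.2f}",
                                "Total Purchase Value": "${:,.2f}"}))
###Output
_____no_output_____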
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
popular_items = top_spenders.groupby(["Item ID", "Item Name"])
pur_count = popular_items["Purchase ID"].nunique()
total_pur = popular_items["Price"].sum()
purchase_value = popular_items["Price"].count()
item_price = total_pur/purchase_value
popular_items = pd.DataFrame({"Purchase Count" : pur_count,
"Item Price": item_price,
"Total Purchase Value" : total_pur})
most_popular_items = popular_items.sort_values("Purchase Count", ascending=False).head()
most_popular_items.style.format({"Item Price":"${:,.2f}", "Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Sort the full items data frame by total purchase value
most_profitable_items = popular_items.sort_values("Total Purchase Value", ascending=False).head()
most_profitable_items.style.format({"Item Price":"${:,.2f}", "Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_df = pd.read_csv(file_to_load)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#get names of columns
purchase_df.columns
purchase_df.head()
#find count from sn
player_count = purchase_df["SN"].value_counts()
#code to get count to total players column
total_players = len(purchase_df['SN'].value_counts())
total_players_df = pd.DataFrame({"Total Players": total_players}, index=[0])
total_players_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#get info on average and uniques
average_price = purchase_df['Price'].mean()
unique_items = purchase_df['Item ID'].nunique()
total_price = purchase_df['Price'].sum()
#make table
summary_df = pd.DataFrame({"Average" : average_price,
"Uniques" : unique_items,
"Total Cost" : [total_price]})
summary_df.index.name = "Table"
format_dict = {'Total Cost':'${0:,.2f}', 'Average':'${0:,.2f}'}
summary_df.style.format(format_dict)
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
gender = purchase_df.groupby("Gender")
counts = gender.nunique()["SN"]
percentage = (counts/total_players*100)
gender_final = pd.DataFrame({ "Percentage of Players": percentage,"Total Count": counts
})
gender_final.index.name = 'Gender'
format_dict = {"Percentage of Players" :'{0:.0f}%'}
gender_final.style.format(format_dict)
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
gender_grouping = purchase_df.groupby("Gender")
purchase_count = counts
average_purchase = gender_grouping["Price"].mean()
avg_total = gender_grouping["Price"].sum()
avg_gender = avg_total/counts
demo_df = pd.DataFrame({"Purchase Count" : purchase_count,
"Avg Purchase Price" : average_purchase,
"Avg Purchase Total" : avg_total,
"Avg Purchase per person" : avg_gender})
demo_df.index.name= "Gender"
format_dict = {'Avg Purchase Price':'${0:,.0f}', 'Avg Purchase Total' :'${0:,.0f}', 'Avg Purchase per person' :'${0:,.0f}',}
demo_df.style.format(format_dict)
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
data_df = purchase_df.copy()
age_num = [0, 10, 15, 20, 25, 30, 35, 40, 100]
age_groups = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
data_df["Age Group"] = pd.cut(data_df["Age"], age_num, labels=age_groups)
grouped_data = data_df.groupby(["Age Group"])
total_count = grouped_data["SN"].nunique()
percentage = (grouped_data["SN"].nunique()/data_df["SN"].nunique()*100)
percentage_df = pd.DataFrame({"Total Count" : total_count,
"Percentage" : percentage})
format_dict = {'Percentage':'{:.2f}%'}
percentage_df.style.format(format_dict)
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
data_df = purchase_df.copy()
age_num = [0, 10, 15, 20, 25, 30, 35, 40, 400]
age_groups = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
data_df["Age Group"] = pd.cut(data_df["Age"], age_num, labels=age_groups)
# Keep every purchase row; the per-person average divides by unique players per group
grouped = data_df.groupby(["Age Group"])
count = grouped["Age"].count()
mean = grouped["Price"].mean()
total = grouped["Price"].sum()
users = grouped["SN"].nunique()
avg_purchase = total/users
final_df = pd.DataFrame({"Purchase Count" : count,
"Avg Purchase Price" : mean,
"Avg Purchase per person" : avg_purchase,
"Total Purchase Value": total})
format_dict = {'Avg Purchase per person':'${0:,.0f}','Avg Purchase Price':'${0:,.0f}', 'Total Purchase Value'
:'${0:,.0f}'}
final_df.style.format(format_dict)
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Group every purchase by screen name, then rank screen names by total spend
grouped = purchase_df.groupby("SN")
itemcount = grouped['Item ID'].count()
prices = grouped["Price"].sum()
average = grouped["Price"].mean()
prices_df = pd.DataFrame({"Purchase Count" : itemcount,
                          "Average Purchase Price" : average,
                          "Total Purchase Value" : prices})
format_price = prices_df.sort_values(["Total Purchase Value"], ascending=False)
format_price.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
pop_data = purchase_df.groupby(["Item Name", "Item ID"])
item = pop_data["Item ID"]
itemname = pop_data["Item Name"]
itemprice = pop_data["Price"].count()
itemvalue = pop_data["Price"].sum()
price = itemvalue/itemprice
pop_summary = pd.DataFrame({"Purchase Count": itemprice,
"Item Price": price,
"Total Purchase Value": itemvalue})
pop_summary = pop_summary.sort_values(["Purchase Count"], ascending=False)
pop_summary.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
pop_summary = pop_summary.sort_values(["Total Purchase Value"], ascending=False)
pop_summary.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "/Users/madhuvenkidusamy/Documents/Data Science Bootcamp/Homeworks/pandas-challenge/HeroesOfPymoli/Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#purchase_data.head()
player_count = len(purchase_data["SN"].unique())
player_count_df = pd.DataFrame({"Total Players": [player_count]})
player_count_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#purchase_data.head()
unique_items = len(purchase_data["Item ID"].unique())
avg_price = round(purchase_data["Price"].mean(),2)
total_purchases = len(purchase_data["Item ID"])
total_revenue = purchase_data["Price"].sum()
purchasing_analysis_df = pd.DataFrame({"Number of Unique Items": unique_items,
"Average Price ($)": [avg_price],
"Number of Purchases": total_purchases,
"Total Revenue ($)": [total_revenue]})
purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#purchase_data.head()
gender_df = purchase_data.drop_duplicates(subset=['SN'])
gender_df = gender_df[['SN','Gender']]
gender_df = gender_df.groupby(["Gender"])
gender_df = gender_df.count()
gender_df = gender_df.rename(columns={"SN":"Total Count" })
percent = round(100*gender_df["Total Count"]/sum(gender_df["Total Count"]),2)
gender_df["Percentage of Players"] = percent.map('{:,.2f}%'.format)
gender_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_count = purchase_data.groupby(["Gender"]).Gender.count()
avg_purchase_price = round(purchase_data.groupby(["Gender"]).Price.mean(),2)
total_purchase_value = purchase_data.groupby(["Gender"]).Price.sum()
avg_total_purchase_person = round(purchase_data.groupby(['SN','Gender']).sum().groupby('Gender')['Price'].mean(),2)
purchasing_analysis_gender = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price ($)": avg_purchase_price,
"Total Purchase Value ($)": total_purchase_value,
"Avg Total Purchase per Person ($)": avg_total_purchase_person })
purchasing_analysis_gender
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
unique_players = purchase_data.drop_duplicates(subset=['SN'])
bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
group_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
unique_players["Age Group"] = pd.cut(unique_players["Age"], bins, labels=group_labels, include_lowest=True)
age_df = unique_players.groupby('Age Group').count()
percent_age = round(100*age_df['Age']/sum(age_df['Age']),2)
age_df["Percentage of Players"] = percent_age.map('{:,.2f}%'.format)
age_df = age_df[['Age','Percentage of Players']]
age_df = age_df.rename(columns={"Age":"Total Count" })
age_df
###Output
/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
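###Markdown
The SettingWithCopyWarning above is raised when the new Age Group column is assigned into a frame pandas suspects is a view of another DataFrame. A minimal sketch of the usual remedy, reusing the same bins and group_labels:
###Code
# Sketch: take an explicit copy of the de-duplicated frame before adding the binned column,
# so the assignment lands in a fresh DataFrame and no chained-assignment warning is raised
unique_players = purchase_data.drop_duplicates(subset=["SN"]).copy()
unique_players["Age Group"] = pd.cut(unique_players["Age"], bins,
                                     labels=group_labels, include_lowest=True)
###Output
_____no_output_____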
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins, labels=group_labels, include_lowest=True)
purchase_count = purchase_data.groupby(["Age Group"]).Age.count()
avg_purchase_price = round(purchase_data.groupby(["Age Group"]).Price.mean(),2)
total_purchase_value = purchase_data.groupby(["Age Group"]).Price.sum()
avg_total_purchase_person = round(purchase_data.groupby(['SN','Age Group']).sum().groupby('Age Group')['Price'].mean(),2)
purchasing_analysis_age = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price ($)": avg_purchase_price,
"Total Purchase Value ($)": total_purchase_value,
"Avg Total Purchase per Person ($)": avg_total_purchase_person })
purchasing_analysis_age
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
purchase_count = purchase_data.groupby(["SN"])['Price'].count()
avg_purchase_price = round(purchase_data.groupby(["SN"]).Price.mean(),2)
total_purchase_value = purchase_data.groupby(["SN"]).Price.sum()
top_spenders = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price ($)": avg_purchase_price,
"Total Purchase Value ($)": total_purchase_value
})
top_spenders = top_spenders.sort_values("Total Purchase Value ($)", ascending=False)
top_spenders.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
purchase_count = purchase_data.groupby(["Item ID",'Item Name'])['Item ID'].count()
item_price = round(purchase_data.groupby(["Item ID",'Item Name']).Price.mean(),2)
total_purchase_value = purchase_data.groupby(["Item ID",'Item Name']).Price.sum()
popular_items = pd.DataFrame({
"Purchase Count": purchase_count,
"Item Price ($)": item_price,
"Total Purchase Value ($)": total_purchase_value
})
popular_items = popular_items.sort_values(["Purchase Count",'Total Purchase Value ($)'], ascending=False)
popular_items.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
profitable_items = popular_items.sort_values(['Total Purchase Value ($)'], ascending=False)
profitable_items.head()
###Output
_____no_output_____
###Markdown
Pandas Homework - Pandas, Pandas, Pandas BackgroundThe data dive continues!Now, it's time to take what you've learned about Python Pandas and apply it to new situations. For this assignment, you'll need to complete **one of two** (not both) Data Challenges. Once again, which challenge you take on is your choice. Just be sure to give it your all -- as the skills you hone will become powerful tools in your data analytics tool belt. Before You Begin1. Create a new repository for this project called `pandas-challenge`. **Do not add this homework to an existing repository**.2. Clone the new repository to your computer.3. Inside your local git repository, create a directory for the Pandas Challenge you choose. Use folder names corresponding to the challenges: **HeroesOfPymoli** or **PyCitySchools**.4. Add your Jupyter notebook to this folder. This will be the main script to run for analysis.5. Push the above changes to GitHub or GitLab. Option 1: Heroes of PymoliCongratulations! After a lot of hard work in the data munging mines, you've landed a job as Lead Analyst for an independent gaming company. You've been assigned the task of analyzing the data for their most recent fantasy game Heroes of Pymoli.Like many others in its genre, the game is free-to-play, but players are encouraged to purchase optional items that enhance their playing experience. As a first task, the company would like you to generate a report that breaks down the game's purchasing data into meaningful insights.Your final report should include each of the following: Player Count* Total Number of Players Purchasing Analysis (Total)* Number of Unique Items* Average Purchase Price* Total Number of Purchases* Total Revenue Gender Demographics* Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed Purchasing Analysis (Gender)* The below each broken by gender * Purchase Count * Average Purchase Price * Total Purchase Value * Average Purchase Total per Person by Gender Age Demographics* The below each broken into bins of 4 years (i.e. <10, 10-14, 15-19, etc.) * Purchase Count * Average Purchase Price * Total Purchase Value * Average Purchase Total per Person by Age Group Top Spenders* Identify the the top 5 spenders in the game by total purchase value, then list (in a table): * SN * Purchase Count * Average Purchase Price * Total Purchase Value Most Popular Items* Identify the 5 most popular items by purchase count, then list (in a table): * Item ID * Item Name * Purchase Count * Item Price * Total Purchase Value Most Profitable Items* Identify the 5 most profitable items by total purchase value, then list (in a table): * Item ID * Item Name * Purchase Count * Item Price * Total Purchase ValueAs final considerations:* You must use the Pandas Library and the Jupyter Notebook.* You must submit a link to your Jupyter Notebook with the viewable Data Frames.* You must include a written description of three observable trends based on the data.* See [Example Solution](HeroesOfPymoli/HeroesOfPymoli_starter.ipynb) for a reference on expected format. CopyrightTrilogy Education Services © 2019. All Rights Reserved.
###Code
import os, pandas as pd
INPUT_PATH = "Resources"
INPUT_FILENAME = "purchase_data.csv"
input_path = os.path.join('.', INPUT_PATH, INPUT_FILENAME)
players_df = pd.read_csv(input_path, encoding="ISO-8859-1")
players_df.head()
###Output
_____no_output_____
###Markdown
Player Count* Total Number of Players
###Code
player_cnt = len(players_df["SN"].unique())
print(f"Player Count: {player_cnt}")
###Output
Player Count: 576
###Markdown
Purchasing Analysis (Total)* Number of Unique Items* Average Purchase Price* Total Number of Purchases* Total Revenue
###Code
item_cnt = len(players_df["Item ID"].unique())
print(f"Item Count: {item_cnt}")
avg_price = players_df["Price"].mean()
print(f"Average Purchase Price: ${avg_price:.2f}")
purch_cnt = len(players_df["Purchase ID"].unique())
print(f"Number of Purchase: {purch_cnt}")
total_rev = players_df["Price"].sum()
print(f"Total Revenue: ${total_rev:,.2f}")
###Output
Item Count: 183
Average Purchase Price: $3.05
Number of Purchase: 780
Total Revenue: $2,379.77
###Markdown
Gender Demographics* Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
male_cnt = len(players_df.loc[players_df["Gender"]=="Male"])
female_cnt = len(players_df.loc[players_df["Gender"]=="Female"])
null_cnt = len(players_df.loc[players_df["Gender"].isnull() == True])
total_ond = len(players_df.loc[(players_df["Gender"]!="Male")&(players_df["Gender"]!="Female")])
total_cnt = len(players_df)
print(f"Number of males: {male_cnt}, {(male_cnt/total_cnt*100.0):.1f}%")
print(f"Number of females: {female_cnt}, {(female_cnt/total_cnt*100.0):.1f}%")
print(f"Number of nulls: {null_cnt}, {(null_cnt/total_cnt*100.0):.1f}%")
print(f"Number of 'Other/Non-Disclosed': {total_ond}, {(total_ond/total_cnt*100.0):.1f}%")
print(f"Number of Players: {total_cnt}")
###Output
Number of males: 652, 83.6%
Number of females: 113, 14.5%
Number of nulls: 0, 0.0%
Number of 'Other/Non-Disclosed': 15, 1.9%
Number of Players: 780
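###Markdown
The counts above are taken over purchase rows, so a player who bought several items is counted once per purchase. A sketch of the per-player version, de-duplicating on SN first (players_df as loaded above):
###Code
# Sketch: the same gender breakdown per unique player rather than per purchase row
players_only = players_df.drop_duplicates(subset="SN")
gender_player_counts = players_only["Gender"].value_counts()
gender_player_pct = (gender_player_counts / len(players_only) * 100).round(1)
print(gender_player_counts)
print(gender_player_pct)
###Output
_____no_output_____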
###Markdown
Purchasing Analysis (Gender)* The below each broken by gender * Purchase Count * Average Purchase Price * Total Purchase Value * Average Purchase Total per Person by Gender
###Code
# Purchase Count
purchasing_analysis_df = players_df.groupby(["Gender"])
purchase_count = purchasing_analysis_df['Purchase ID'].count()
print(f"Purchase Count: {purchase_count}")
print("")
# Average Purchase Price
avg_price = purchasing_analysis_df["Price"].mean()
print(f"Average Purchase Price: {(avg_price)}")
print("")
# Total Purchase Value
purchase_value = purchasing_analysis_df["Price"].sum()
print(f"Total Purchase Value: {(purchase_value)}")
print("")
# Average Purchase Total per Person by Gender
purchase_total_per_person = purchasing_analysis_df["Price"].sum()/purchasing_analysis_df["SN"].count()
print(f"Average Purchase Total per Person: {(purchase_total_per_person)}")
print("")
# Issue: Average Purchase Total per Person is not correct
# Issue: How to best combine these results into a single DF?
###Output
Purchase Count: Gender
Female 113
Male 652
Other / Non-Disclosed 15
Name: Purchase ID, dtype: int64
Average Purchase Price: Gender
Female 3.203009
Male 3.017853
Other / Non-Disclosed 3.346000
Name: Price, dtype: float64
Total Purchase Value: Gender
Female 361.94
Male 1967.64
Other / Non-Disclosed 50.19
Name: Price, dtype: float64
Average Purchase Total per Person: Gender
Female 3.203009
Male 3.017853
Other / Non-Disclosed 3.346000
dtype: float64
###Markdown
Top_Spender = pd.DataFrame({'Purchase Count': purchase_count, 'Average Purchase Price': average_purchase_price, 'Total Purchase Value': purchase_value, 'Avg Total Purchase per Person': average_total_purchases})
Gender_Purchasing_Analysis['Total Purchase Value'] = Gender_Purchasing_Analysis['Total Purchase Value'].map("${:.2f}".format)
###Code
#Most Popular Items
#Item ID
#Item Name
#Purchase Count
#Item Price
#Total Purchase Value
# create a new dataframe based on the Item name and ID , then will calculate based on them
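# (assumes `purchased_data` holds the purchase_data.csv DataFrame loaded in an earlier cell of this notebook)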
items = purchased_data[["Item ID", "Item Name", "Price"]]
item_details = items.groupby(["Item ID","Item Name"])
purchase_count_item = item_details["Price"].count()
purchase_value = (item_details["Price"].sum())
item_price = purchase_value/purchase_count_item
most_popular_items_anaylsis = pd.DataFrame({"Purchase Count": purchase_count_item,"Item Price": item_price,"Total Purchase Value"
:purchase_value})
most_popular_items_anaylsis
# Sort in ascending order to get top 5 items
most_popular_items= most_popular_items_anaylsis.sort_values(["Purchase Count"], ascending=False).head()
most_popular_items
#Most Profitable Items
#Item ID
#Item Name
#Purchase Count
#Item Price
#Total Purchase Value
# arrange the items from highest to lowest total purchase value
most_profitable_items = most_popular_items_anaylsis.sort_values(["Total Purchase Value"], ascending=False).head()
most_profitable_items
###Output
_____no_output_____
###Markdown
Player Count
###Code
# Counting the total number of players in the list of unique names for total player count
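# (assumes earlier cells of this notebook created `df` from purchase_data.csv and `unique_players` as df["SN"].unique())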
total_players = len(unique_players)
total_players = pd.DataFrame({"Total Players": [total_players]})
total_players
# Getting a list of the unique items by item name
unique_items = df["Item Name"].unique()
# Total number of unique items
total_unique_items = len(unique_items)
#Average Purchase Price
avg_purchase_price = df["Price"].mean()
# Total Purchases By Counting the Number of Purchase IDs
total_purchases = df["Purchase ID"].count()
total_purchases
# Total Revenue Retrieved by summing the purchase price column
total_revenue = df["Price"].sum()
total_revenue
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total)
###Code
#Putting Together Purchasing Analysis Dataframe
purchasing_df = {
"Number of Unique Items": [total_unique_items],
"Average Purchase Price": [avg_purchase_price],
"Total Number of Purchases": [total_purchases],
"Total Revenue": [total_revenue]}
purchasing_df = pd.DataFrame(purchasing_df)
purchasing_df["Average Purchase Price"] = purchasing_df["Average Purchase Price"].map("${:,.2f}".format)
purchasing_df["Total Revenue"] = purchasing_df["Total Revenue"].map("${:,.2f}".format)
#Printing out Purchasing Analysis (Total)
purchasing_df
###Output
_____no_output_____
###Markdown
Gender Demographics
###Code
# Group data by Gender and getting the number of unique screen names
gender_demo = df.groupby("Gender")["SN"].nunique()
# Adding to a dataframe
gender_demo = pd.DataFrame({"Total Count": gender_demo})
#calculating percentages and adding to the dataframe
gender_percentages = (gender_demo["Total Count"] / gender_demo["Total Count"].sum() * 100)
gender_demo["Percentage of Players"] = gender_percentages
#formatting
gender_demo["Percentage of Players"] = gender_demo["Percentage of Players"].map("{:.2f}%".format)
gender_demo
###Output
_____no_output_____
###Markdown
Purchase Analysis (Gender)
###Code
#Grab the count of purchase IDs for each gender
gender_purchase = df.groupby("Gender")["Purchase ID"].count()
#Create a new dataframe with the purchase count by gender
gender_purchase_df = pd.DataFrame({"Purchase Count": gender_purchase})
# Calculate the average purchase price for each gender
average_purchase = df.groupby("Gender")["Price"].mean()
# Map and format in a dataframe
gender_purchase_df["Average Purchase Price"] = average_purchase.map("${:.2f}".format)
# Calculate the total purchase price for each gender
total_purchase = df.groupby("Gender")["Price"].sum()
gender_purchase_df["Total Purchase Price"] = total_purchase.map("${:.2f}".format)
gender_purchase_df
#Calculate the Average Purchase Total per Person by Gender
per_person_purchase = df.groupby("SN")["Price"].sum()
per_person_purchase_df = pd.DataFrame({"Total Purchase Per Person": per_person_purchase})
#per_person_purchase_df = pd.DataFrame({"Average Purchase Total Per Person By Gender": per_person_purchase})
# Merge dataframes on "SN" field to get the gender for each person
gender_merge_df = pd.merge(df, per_person_purchase_df, on="SN")
#Drop the duplicate screen names, keeping the first instance
gender_merge_df = gender_merge_df.drop_duplicates(subset="SN", keep="first")
#Calculate the average total purchase per person by gender
gender_per_person_purchase = gender_merge_df.groupby("Gender")["Total Purchase Per Person"].mean()
#Add the average to the gender purchase summary table
gender_purchase_df["Average Total Purchase Per Person"] = gender_per_person_purchase.map("${:.2f}".format)
gender_purchase_df
###Output
_____no_output_____
###Markdown
Age Demographics
###Code
# want to see the upper and lower levels of age in the original df
print(df["Age"].max())
print(df["Age"].min())
# Binning - creating the bins for the age ranges - max is 50 as 45 is the maximum age of a player
bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 50]
#Assign names to bins
names = ["Less than 10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40 and Over"]
df["Age Range"] = pd.cut(df["Age"], bins, labels = names)
df
#Group by age range
age_range = df.groupby("Age Range")
# Get the number of unique screen names by age range
age_range_player = df.groupby("Age Range")["SN"].nunique()
#create a dataframe with the total count of players by age range
age_range_player_df = pd.DataFrame({"Total Count": age_range_player})
# calculate the percentage of players by age range
age_range_percentage = (age_range_player_df["Total Count"] / age_range_player_df["Total Count"].sum()) * 100
#add a colum of the percentage of players by age range to the age_range df and format it
age_range_player_df["Percentage of Players"] = age_range_percentage.map("{:.2f}%".format)
age_range_player_df
# Group the data by age range
age_range = df.groupby("Age Range")
# Get the total number of purchases by age range
age_range_purchases = age_range["Purchase ID"].count()
# Calculate average purchase price by age range
age_range_avg_price = age_range["Price"].mean()
age_range_avg_price = age_range_avg_price.map("${:.2f}".format)
# Calculate the total purchase value per age range
age_range_total_price = age_range["Price"].sum()
# don't format to a string here -- the raw totals are still needed for the per-person division below
# age_range_total_price = age_range_total_price.map("${:.2f}".format)
# put all of the above into a summary table
age_range_df = pd.DataFrame({"Purchase Count": age_range_purchases,
"Average Purchase Price": age_range_avg_price,
"Total Purchase Value": age_range_total_price})
age_range_df
# Calcuate the average total purchases per person
age_range_avg_purchase_pp = age_range_df["Total Purchase Value"] / age_range_player_df["Total Count"]
age_range_avg_purchase_pp = age_range_avg_purchase_pp.map("${:.2f}".format)
total_purchase_value = age_range_df["Total Purchase Value"]
total_purchase_value = total_purchase_value.map("${:.2f}".format)
age_range_df["Average Purchase Total Per Person by Age Group"] = age_range_avg_purchase_pp
age_range_df["Total Purchase Value"] = total_purchase_value
age_range_df
###Output
_____no_output_____
###Markdown
Top Spenders
###Code
# calculate the total purchase value per screen name/person
pp_total_purchase_value = df.groupby("SN")["Price"].sum()
# calculate the average purchase price per screen name/person
pp_avg_purchase_price = df.groupby("SN")["Price"].mean()
# Count the number of purchase IDs per screen name/person
pp_purchase_ids = df.groupby("SN")["Purchase ID"].count()
# put all of the above into a dataframe
pp_purchase_df = pd.DataFrame({"Purchase Count": pp_purchase_ids,
"Average Purchase Price": pp_avg_purchase_price,
"Total Purchase Value": pp_total_purchase_value})
# Sort the values by total purchase value in descending order / formatting
pp_purchase_df_sorted = pp_purchase_df.sort_values("Total Purchase Value", ascending=False)
pp_purchase_df_sorted["Average Purchase Price"] = pp_purchase_df_sorted["Average Purchase Price"].map("${:.2f}".format)
pp_purchase_df_sorted["Total Purchase Value"] = pp_purchase_df_sorted["Total Purchase Value"].map("${:.2f}".format)
#print the top 5 spenders by total purchase value
pp_purchase_df_sorted.head(5)
###Output
_____no_output_____
###Markdown
Most Popular Items
###Code
# Group the original dataframe by Item ID and Name
items = df.groupby(["Item ID", "Item Name"])
# Get the count of purchases per item
item_purchase = items["Purchase ID"].count()
# put into a dataframe
item_purchase_df = pd.DataFrame({"Purchase Count": item_purchase})
# merge new dataframe with original on Item Name
merge_price = pd.merge(df, item_purchase_df, on="Item Name")
merge_price = merge_price[["Item ID", "Item Name", "Price", "Purchase Count"]]
merge_price = merge_price.drop_duplicates(subset="Item Name", keep="first")
merge_price
# Group items by price
item_price = merge_price.groupby(["Item ID", "Item Name"])["Price"].sum()
# Calculate total price value by item id and name
total_item_price = items["Price"].sum()
item_purchase_df["Item Price"] = item_price
item_purchase_df["Total Purchase Value"] = total_item_price
sort_by_purchase = item_purchase_df.sort_values("Purchase Count", ascending=False)
sort_by_purchase["Item Price"] = sort_by_purchase["Item Price"].map("${:.2f}".format)
sort_by_purchase["Total Purchase Value"] = sort_by_purchase["Total Purchase Value"].map("${:.2f}".format)
sort_by_purchase.head(5)
###Output
_____no_output_____
###Markdown
Most Profitable Items
###Code
# Sort item purchase df by total purchase value
total_value_sort = item_purchase_df.sort_values("Total Purchase Value", ascending=False)
#format the columns
total_value_sort["Total Purchase Value"] = total_value_sort["Total Purchase Value"].map("${:.2f}".format)
total_value_sort["Item Price"] = total_value_sort["Item Price"].map("${:.2f}".format)
#Assign top 5 to a variable
total_value_top_five = total_value_sort.head(5)
# Print the top 5
total_value_top_five
###Output
_____no_output_____
###Markdown
# Heroes Of Pymoli - Analysis

## Problem Statement

Congratulations! After a lot of hard work in the data munging mines, you've landed a job as Lead Analyst for an independent gaming company. You've been assigned the task of analyzing the data for their most recent fantasy game Heroes of Pymoli. Like many others in its genre, the game is free-to-play, but players are encouraged to purchase optional items that enhance their playing experience. As a first task, the company would like you to generate a report that breaks down the game's purchasing data into meaningful insights.

## Analysis Report (based on the purchase dataset provided)

1. Out of the total players, 'Male' players form the majority at about 84%, whereas 'Female' players make up a relatively small proportion of about 14%. The 'Other/Non-Disclosed' gender holds less than 2% of the total number of players.
2. The majority of players fall in the 20-24 age group; the next highest is the 15-19 age group. The lowest numbers of players are in the below-10 and above-40 age groups (about 2%).
3. The 20-24 age group also has the highest total purchase value, 1,114.06 dollars, with an average purchase price per person of 4.32. Interestingly, the age group with the highest average purchase price per person (4.76) is 35-39, with a total purchase value of 147.67 dollars.
4. The top spender is Lisosia93 (total purchase count = 5, average purchase price = 3.79, total purchase value = 18.96).
5. The most popular item is Item ID 178, "Oathbreaker, Last Hope of the Breaking Storm" (purchase count = 12, price = 4.23, total purchase value = 50.76).
6. The most profitable item is also Item ID 178, "Oathbreaker, Last Hope of the Breaking Storm" (purchase count = 12, price = 4.23, total purchase value = 50.76).
###Code
# Dependencies and Setup
import pandas as pd
import os
import matplotlib.pyplot as plt
# File path
file_to_load = os.path.join('Resources', 'purchase_data.csv')
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head(3)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# calculating a df that holds total number of unique players in purchase_data
total_players = pd.DataFrame([purchase_data['SN'].nunique()], columns=['Total Players'])
total_players
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame Calculations for purchase analysis dataframe
###Code
# calculating the number of unique items in total
unique_items = purchase_data['Item ID'].nunique()
# calculating the average purchase price
avg_price = f"${purchase_data['Price'].mean():.2f}"
# calculating the number of purchases in total
num_of_purchases = purchase_data['Purchase ID'].count()
# calculating the total revenue (in total for purchase data)
total_revenue = f"${purchase_data['Price'].sum():,.2f}"
###Output
_____no_output_____
###Markdown
Creating the summary dataframe for purchase analysis using the values calculated
###Code
purchase_analysis_df = pd.DataFrame({'Number of Unique Items': [unique_items],
'Average Price': [avg_price],
'Number of Purchases': [num_of_purchases],
'Total Revenue': [total_revenue]})
purchase_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed Calculations for gender demographics dataframe
###Code
# counting total number of players
total_count = purchase_data['SN'].nunique()
# counting total number and percent of male players
male_count = purchase_data[purchase_data['Gender'] == 'Male']['SN'].nunique()
male_percent = f"{(male_count / total_count) * 100:.2f}%"
# counting total number and percent of female players
female_count = purchase_data[purchase_data['Gender'] == 'Female']['SN'].nunique()
female_percent = f"{(female_count / total_count) * 100:.2f}%"
# counting total number and percent of other/non-disclosed players
other_count = purchase_data[purchase_data['Gender'] == 'Other / Non-Disclosed']['SN'].nunique()
other_percent = f"{(other_count / total_count) * 100:.2f}%"
###Output
_____no_output_____
###Markdown
Creating gender demographics dataframe
###Code
gender_demographics_df = pd.DataFrame({'Total Count': [male_count, female_count, other_count],
'Percentage of Players': [male_percent, female_percent, other_percent],
'Gender': ['Male', 'Female', 'Other / Non-Disclosed']})\
.set_index('Gender')
gender_demographics_df.index.name=None
gender_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame Calculations for gender based purchase analysis
###Code
# grouping the purchase data based on the gender
gender_data = purchase_data.groupby('Gender')
# calculating purchase count based on gender
purchase_count_by_gender = gender_data['Purchase ID'].count()
# calculating total purchase price/value based on gender
tot_purchase_price_by_gender = gender_data['Price'].sum()
# calculating average purchase price based on gender
avg_purchase_price_by_gender = gender_data['Price'].mean()
# calculating average total purchase per person based on gender
gender_count = gender_data['SN'].nunique()
avg_total_purchase_per_person_by_gender = tot_purchase_price_by_gender / gender_count
###Output
_____no_output_____
###Markdown
Creating and formatting the gender purchase analysis dataframe
###Code
gender_purchasing_analysis_df = pd.DataFrame({'Purchase Count': purchase_count_by_gender,
'Average Purchase Price': avg_purchase_price_by_gender,
'Total Purchase Value': tot_purchase_price_by_gender,
'Avg Total Purchase per Person': avg_total_purchase_per_person_by_gender})
gender_purchasing_analysis_df = gender_purchasing_analysis_df.style.format({'Average Purchase Price': "${:,.2f}".format,
'Total Purchase Value': "${:,.2f}".format,
'Avg Total Purchase per Person': "${:,.2f}".format
})
gender_purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table Using bins and doing calculations based on Age Demographics
###Code
# creating bins and bin labels
bins = [0, 10, 15, 20, 25, 30, 35, 40, 150]
bin_names = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
# creating new age_df using purchase_data df with an additional column for age groups (based on bins)
age_df = purchase_data.assign(Age_Groups = pd.cut(purchase_data["Age"], bins, labels=bin_names, right=False))
age_df.head()
# creating age_data that holds data grouped by age_groups columns
age_data = age_df.groupby('Age_Groups')
# calculating total count of players by age groups
total_count_by_age = age_data['SN'].nunique()
# calculating percent of players by age groups
total_players = purchase_data['SN'].nunique()
percent_of_players_by_age = total_count_by_age / total_players * 100
percent_of_players_by_age = percent_of_players_by_age.apply("{:.2f}%".format)
###Output
_____no_output_____
###Markdown
Creating the age demographics dataframe based on the values calculated
###Code
age_demographics_df = pd.DataFrame({'Total Count': total_count_by_age,
'Percentage of Players':percent_of_players_by_age
})
age_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame Calculations for purchasing analysis by age
###Code
# calculating purchase count by age
purchase_count_by_age = age_data['Purchase ID'].count()
# calculating total purchase value by age
purchase_price_by_age = age_data['Price'].sum()
# calculating average purchase price by age
avg_purchase_price_by_age = age_data['Price'].mean()
# calculating average total purchase price per person by age
avg_total_purchase_per_person = purchase_price_by_age / age_data['SN'].nunique()
###Output
_____no_output_____
###Markdown
Creating & formatting the purchase_analysis_by_age df based on the values calculated
###Code
purchasing_analysis_age_df = pd.DataFrame({"Purchase Count": purchase_count_by_age,
"Average Purchase Price": avg_purchase_price_by_age,
"Total Purchase Value":purchase_price_by_age,
"Average Purchase Total per Person": avg_total_purchase_per_person})
purchasing_analysis_age_df["Average Purchase Price"] = purchasing_analysis_age_df["Average Purchase Price"].map("${:,.2f}".format)
purchasing_analysis_age_df["Total Purchase Value"] = purchasing_analysis_age_df["Total Purchase Value"].map("${:,.2f}".format)
purchasing_analysis_age_df["Purchase Count"] = purchasing_analysis_age_df["Purchase Count"].map("{:,}".format)
purchasing_analysis_age_df["Average Purchase Total per Person"] = purchasing_analysis_age_df["Average Purchase Total per Person"].map("${:,.2f}".format)
purchasing_analysis_age_df = purchasing_analysis_age_df.loc[:, ["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Average Purchase Total per Person"]]
purchasing_analysis_age_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame Calculations for top spenders
###Code
# calculating purchase count by spenders
purchase_count_by_spender = purchase_data.groupby('SN')['Purchase ID'].count()
# calculating total purchase price/value by spenders
total_purchase_value_spender = purchase_data.groupby('SN')['Price'].sum()
# calculating average purchase price by spenders
avg_purchase_price_spender = purchase_data.groupby('SN')['Price'].mean()
###Output
_____no_output_____
###Markdown
Creating & formatting the top_spender_df based on the values calculated
###Code
top_spender_df = pd.DataFrame({'Purchase Count': purchase_count_by_spender,
'Average Purchase Price': avg_purchase_price_spender,
'Total Purchase Value': total_purchase_value_spender,
}).sort_values('Total Purchase Value', ascending=False).head()
top_spender_df = top_spender_df.style.format({'Average Purchase Price': "${:,.2f}".format,
'Total Purchase Value': "${:,.2f}".format,
})
top_spender_df
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame Calculations for most popular items
###Code
# creating items dataframe using purchase_data df
item_df = pd.DataFrame(purchase_data[['Item ID', 'Item Name', 'Price']])
item_df.head(2)
# creating item_data by grouping item id and item name for further calculations
item_data = item_df.groupby(['Item ID', 'Item Name'])
# calculating total purchase price based on items
total_purchase_value_item = item_data['Price'].sum()
# calculating purchase count based on items
purchase_count_by_item = item_data['Price'].count()
# calculating item price for unique items
item_price = total_purchase_value_item / purchase_count_by_item
###Output
_____no_output_____
###Markdown
Creating & formatting the most_popular_items_df based on the values calculated
###Code
most_popular_items_df = pd.DataFrame({'Purchase Count': purchase_count_by_item,
'Item Price': item_price,
'Total Purchase Value': total_purchase_value_item})
formatted_most_popular_items_df = most_popular_items_df.sort_values('Purchase Count', ascending=False).head()
formatted_most_popular_items_df = formatted_most_popular_items_df.style.format({'Item Price': "${:,.2f}".format,
'Total Purchase Value': "${:,.2f}".format
})
formatted_most_popular_items_df
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame Creating & formatting the most_profitable_items_df by sorting most_popular_items_df
###Code
most_profitable_items_df = most_popular_items_df.sort_values('Total Purchase Value', ascending=False).head()
most_profitable_items_df = most_profitable_items_df.style.format({'Item Price': "${:,.2f}".format,
'Total Purchase Value': "${:,.2f}".format
})
most_profitable_items_df
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
total_players = len(purchase_data['SN'].unique())
print_total_players = pd.DataFrame({"Total Players": [total_players]})
print_total_players
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Found the nunique() function here https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.nunique.html
# Calculate each value
unique_items = purchase_data["Item Name"].nunique()
avg_price = round(purchase_data["Price"].mean(),2)
total_purchases = purchase_data['Purchase ID'].nunique()
total_revenue = purchase_data['Price'].sum()
#print the output table
print_purchasing_analysis = pd.DataFrame({
'Number of Unique Items': [unique_items],
'Average Purchase Price': [avg_price],
'Total Number of Purchases': [total_purchases],
'Total Revenue': [total_revenue]
})
# found how to format columns here: https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html
print_purchasing_analysis['Average Purchase Price'] = print_purchasing_analysis['Average Purchase Price'].map('${:,.2f}'.format)
print_purchasing_analysis['Total Revenue'] = print_purchasing_analysis['Total Revenue'].map('${:,.2f}'.format)
print_purchasing_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Calculate Percentage and Count of Male Players
male_data = purchase_data.loc[purchase_data["Gender"] == 'Male',:]
male_players = male_data['SN'].nunique()
male_percentage = round((male_players/total_players) * 100,2)
# Calculate Percentage and Count of Female Players
female_data = purchase_data.loc[purchase_data["Gender"] == 'Female',:]
female_players = female_data['SN'].nunique()
female_percentage = round((female_players/total_players) * 100,2)
# Calculate Percentage and Count of Other / Non-Disclosed Players
other_data = purchase_data.loc[purchase_data["Gender"] == 'Other / Non-Disclosed',:]
other_players = other_data['SN'].nunique()
other_percentage = round((other_players/total_players) * 100,2)
# Print output
print_gender_demographics = pd.DataFrame({
"Gender": ['Male', 'Female', 'Other/Non-Disclosed'],
'Total Count': [male_players, female_players, other_players],
'Percentage of Players': [male_percentage, female_percentage, other_percentage]
})
# set index to genders to get rid of the index numbers
print_gender_demographics = print_gender_demographics.set_index("Gender")
# Format for percentages
print_gender_demographics['Percentage of Players'] = print_gender_demographics['Percentage of Players'].map('{:.2f}%'.format)
print_gender_demographics
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Male Purchase Count
male_purchase_count = male_data['Purchase ID'].nunique()
# Male Average Purchase Price
male_avg_price = round(male_data['Price'].mean(),2)
# Male Total Purchase Value
male_total_purchases = male_data['Price'].sum()
# Male Average Purchase total per Person by Gender
male_avg_per_person = round(male_total_purchases / male_players, 2)
# Female Purchase Count
female_purchase_count = female_data['Purchase ID'].nunique()
# Female Average Purchase Price
female_avg_price = round(female_data['Price'].mean(),2)
# Female Total Purchase Value
female_total_purchases = female_data['Price'].sum()
# Female Average Purchase total per person by gender
female_avg_per_person = round(female_total_purchases / female_players, 2)
# Other/Non-Disclosed Purchase Count
other_purchase_count = other_data['Purchase ID'].nunique()
# Other/Non-Disclosed Average Purchase Price
other_avg_price = round(other_data['Price'].mean(),2)
# Other/Non-Disclosed Total Purchase Value
other_total_purchases = other_data['Price'].sum()
# Other/Non-Disclosed Average Purchase Total per Person by Gender
other_avg_per_person = round(other_total_purchases / other_players, 2)
# Print output
print_gender_purchasing = pd.DataFrame({
"Gender": ['Male', 'Female', 'Other/Non-Disclosed'],
'Purchase Count': [male_purchase_count, female_purchase_count, other_purchase_count],
'Average Purchase Price': [male_avg_price, female_avg_price, other_avg_price],
'Total Purchase Value': [male_total_purchases, female_total_purchases, other_total_purchases],
'Avg Total Purchase per Person': [male_avg_per_person, female_avg_per_person, other_avg_per_person]
})
# set index to genders to get rid of the index numbers
print_gender_purchasing = print_gender_purchasing.set_index("Gender")
# format the price columns as currency
print_gender_purchasing['Average Purchase Price'] = print_gender_purchasing['Average Purchase Price'].map('${:,.2f}'.format)
print_gender_purchasing['Total Purchase Value'] = print_gender_purchasing['Total Purchase Value'].map('${:,.2f}'.format)
print_gender_purchasing['Avg Total Purchase per Person'] = print_gender_purchasing['Avg Total Purchase per Person'].map('${:,.2f}'.format)
print_gender_purchasing
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Found an article on pd.cut(): https://www.geeksforgeeks.org/pandas-cut-method-in-python/
# create list for age bins
bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
# Declare bin labels
bin_labels = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
# Use pd.cut() to split data into age bins
purchase_data['Age Bins'] = pd.cut(purchase_data['Age'], bins, labels=bin_labels, include_lowest=True)
# Group the purchase data by age group
pd_by_age = purchase_data.groupby(['Age Bins'])
# Create dataframe using count()
pd_by_age_df = pd.DataFrame(pd_by_age['SN'].nunique())
# Rename SN column to Total Count
pd_by_age_df.rename(columns={'SN':'Total Count'}, inplace=True)
# Calculate Percentage of Players
pd_by_age_df['Percentage of Players'] = round((pd_by_age_df['Total Count'] /pd_by_age_df['Total Count'].sum()) * 100, 2)
# Format Percentage of Players as a percent
pd_by_age_df['Percentage of Players'] = pd_by_age_df['Percentage of Players'].map("{:.2f}%".format)
pd_by_age_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Less than 10 years old
age_less_10 = purchase_data.loc[purchase_data["Age"] < 10 ,:]
# Less than 10 Purchase Count
purchase_count_less_10 = age_less_10['Purchase ID'].nunique()
# Less than 10 Average Purchase Price
avg_price_less_10 = round(age_less_10['Price'].mean(),2)
# Less than 10 Total Purchase Value
total_purchases_less_10 = age_less_10['Price'].sum()
# Calculate total players less than 10
less_10_players = age_less_10['SN'].nunique()
# Less than 10 Average Purchase Total per Person
less_10_avg_per_person = round(total_purchases_less_10 / less_10_players, 2)
# Age 10-14
age_10_14 = purchase_data.loc[(purchase_data["Age"] >= 10) & (purchase_data["Age"] <= 14),:]
# 10-14 Purchase Count
purchase_count_10_14 = age_10_14['Purchase ID'].nunique()
# 10-14 Average Purchase Price
avg_price_10_14 = round(age_10_14['Price'].mean(),2)
# 10-14 Total Purchase Value
total_purchases_10_14 = age_10_14['Price'].sum()
# Calculate total players between 10-14
age_10_14_players = age_10_14['SN'].nunique()
# 10-14 Average Purchase Total per Person
age_10_14_avg_per_person = round(total_purchases_10_14 / age_10_14_players, 2)
# Age 15-19
age_15_19 = purchase_data.loc[(purchase_data["Age"] >= 15) & (purchase_data["Age"] <= 19),:]
# 15-19 Purchase Count
purchase_count_15_19 = age_15_19['Purchase ID'].nunique()
# 15-19 Average Purchase Price
avg_price_15_19 = round(age_15_19['Price'].mean(),2)
# 15-19 Total Purchase Value
total_purchases_15_19 = age_15_19['Price'].sum()
# Calculate total players between 15-19
age_15_19_players = age_15_19['SN'].nunique()
# 15-19 Average Purchase Total per Person
age_15_19_avg_per_person = round(total_purchases_15_19 / age_15_19_players, 2)
# Age 20-24
age_20_24 = purchase_data.loc[(purchase_data["Age"] >= 20) & (purchase_data["Age"] <= 24),:]
# 20-24 Purchase Count
purchase_count_20_24 = age_20_24['Purchase ID'].nunique()
# 20-24 Average Purchase Price
avg_price_20_24 = round(age_20_24['Price'].mean(),2)
# 20-24 Total Purchase Value
total_purchases_20_24 = age_20_24['Price'].sum()
# Calculate total players between 20-24
age_20_24_players = age_20_24['SN'].nunique()
# 20-24 Average Purchase Total per Person
age_20_24_avg_per_person = round(total_purchases_20_24 / age_20_24_players, 2)
# Age 25-29
age_25_29 = purchase_data.loc[(purchase_data["Age"] >= 25) & (purchase_data["Age"] <= 29),:]
# 25-29 Purchase Count
purchase_count_25_29 = age_25_29['Purchase ID'].nunique()
# 25-29 Average Purchase Price
avg_price_25_29 = round(age_25_29['Price'].mean(),2)
# 25-29 Total Purchase Value
total_purchases_25_29 = age_25_29['Price'].sum()
# Calculate total players between 25-29
age_25_29_players = age_25_29['SN'].nunique()
# 25-29 Average Purchase Total per Person
age_25_29_avg_per_person = round(total_purchases_25_29 / age_25_29_players, 2)
# Age 30-34
age_30_34 = purchase_data.loc[(purchase_data["Age"] >= 30) & (purchase_data["Age"] <= 34),:]
# 30-34 Purchase Count
purchase_count_30_34 = age_30_34['Purchase ID'].nunique()
# 30-34 Average Purchase Price
avg_price_30_34 = round(age_30_34['Price'].mean(),2)
# 30-34 Total Purchase Value
total_purchases_30_34 = age_30_34['Price'].sum()
# Calculate total players between 30-34
age_30_34_players = age_30_34['SN'].nunique()
# 30-34 Average Purchase Total per Person
age_30_34_avg_per_person = round(total_purchases_30_34 / age_30_34_players, 2)
# Age 35-39
age_35_39 = purchase_data.loc[(purchase_data["Age"] >= 35) & (purchase_data["Age"] <= 39),:]
# 35-39 Purchase Count
purchase_count_35_39 = age_35_39['Purchase ID'].nunique()
# 35-39 Average Purchase Price
avg_price_35_39 = round(age_35_39['Price'].mean(),2)
# 35-39 Total Purchase Value
total_purchases_35_39 = age_35_39['Price'].sum()
# Calculate total players between 35-39
age_35_39_players = age_35_39['SN'].nunique()
# 35-39 Average Purchase Total per Person
age_35_39_avg_per_person = round(total_purchases_35_39 / age_35_39_players, 2)
# Age 40+
age_40_plus = purchase_data.loc[purchase_data["Age"] >= 40,:]
# 40+ Purchase Count
purchase_count_40_plus = age_40_plus['Purchase ID'].nunique()
# 40+ Average Purchase Price
avg_price_40_plus = round(age_40_plus['Price'].mean(),2)
# 40+ Total Purchase Value
total_purchases_40_plus = age_40_plus['Price'].sum()
# Calculate total players between 40+
age_40_plus_players = age_40_plus['SN'].nunique()
# 40+ Average Purchase Total per Person
age_40_plus_avg_per_person = round(total_purchases_40_plus / age_40_plus_players, 2)
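# Equivalent sketch using the "Age Bins" column created earlier: one groupby replaces the
# eight hand-written blocks above (the names below are illustrative, not from the original).
age_groups = purchase_data.groupby("Age Bins")
age_summary_sketch = pd.DataFrame({
    "Purchase Count": age_groups["Purchase ID"].nunique(),
    "Average Purchase Price": age_groups["Price"].mean().round(2),
    "Total Purchase Value": age_groups["Price"].sum(),
    "Avg Total Purchase per Person": (age_groups["Price"].sum() / age_groups["SN"].nunique()).round(2),
})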
# Print output
print_age_purchasing = pd.DataFrame({
"Age Ranges": bin_labels,
'Purchase Count': [purchase_count_less_10, purchase_count_10_14, purchase_count_15_19, purchase_count_20_24, purchase_count_25_29, purchase_count_30_34, purchase_count_35_39, purchase_count_40_plus],
'Average Purchase Price': [avg_price_less_10, avg_price_10_14, avg_price_15_19, avg_price_20_24, avg_price_25_29, avg_price_30_34, avg_price_35_39, avg_price_40_plus],
'Total Purchase Value': [total_purchases_less_10, total_purchases_10_14, total_purchases_15_19, total_purchases_20_24, total_purchases_25_29, total_purchases_30_34, total_purchases_35_39, total_purchases_40_plus],
'Avg Total Purchase per Person': [less_10_avg_per_person, age_10_14_avg_per_person, age_15_19_avg_per_person, age_20_24_avg_per_person, age_25_29_avg_per_person, age_30_34_avg_per_person, age_35_39_avg_per_person, age_40_plus_avg_per_person]
})
# set index to genders to get rid of the index numbers
print_age_purchasing = print_age_purchasing.set_index("Age Ranges")
# format the price columns as currency
print_age_purchasing['Average Purchase Price'] = print_age_purchasing['Average Purchase Price'].map('${:,.2f}'.format)
print_age_purchasing['Total Purchase Value'] = print_age_purchasing['Total Purchase Value'].map('${:,.2f}'.format)
print_age_purchasing['Avg Total Purchase per Person'] = print_age_purchasing['Avg Total Purchase per Person'].map('${:,.2f}'.format)
print_age_purchasing
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Create dataframe to hold count of spenders
count_spenders = pd.DataFrame(purchase_data.groupby('SN').count())
# create dataframe to sum purchase data grouped by spender
top_spenders_df = pd.DataFrame(purchase_data.groupby('SN').sum())
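# (note: .sum() on the full groupby also adds up the other numeric columns such as Age, Item ID, and
#  Purchase ID; only the summed Price is actually used below)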
# sort dataframe by Price desc
top_spenders_df = top_spenders_df.sort_values("Price", ascending=False)
# add purchase count column
top_spenders_df['Purchase Count'] = count_spenders['Item ID']
# Calculate average purchase price per spender
top_spenders_df['Average Purchase Price'] = round(top_spenders_df['Price'] / top_spenders_df['Purchase Count'],2)
# Rename price column to total purchase value
top_spenders_df.rename(columns={'Price': 'Total Purchase Value'}, inplace=True)
# remove unwanted columns
top_spenders_df = top_spenders_df.loc[:,['Purchase Count', 'Average Purchase Price', 'Total Purchase Value']]
# format columns as currency
top_spenders_df['Average Purchase Price'] = top_spenders_df['Average Purchase Price'].map('${:,.2f}'.format)
top_spenders_df['Total Purchase Value'] = top_spenders_df['Total Purchase Value'].map('${:,.2f}'.format)
# Print top 5
top_spenders_df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Create new dataframe with the Item ID, Item Name, and Item Price columns
items = purchase_data[['Item ID', 'Item Name', 'Price']]
# create new dataframe to hold item counts
items_count_df = pd.DataFrame(items.groupby(['Item ID', 'Item Name']).count())
# create new dataframe to hold item sums
items_sum_df = pd.DataFrame(items.groupby(['Item ID', 'Item Name']).sum())
# Create Purchase Count column
items_count_df['Purchase Count'] = items_count_df['Price']
# Create Item Price column
items_count_df['Item Price'] = round(items_sum_df['Price'] / items_count_df['Price'], 2)
# Create Total Purchase Value
items_count_df['Total Purchase Value'] = items_sum_df['Price']
# Remove Price column
del items_count_df['Price']
# sort values based on Purchase count descending
popular_items = items_count_df.sort_values(by=['Purchase Count'], ascending=False)
# Format columns as currency
popular_items['Item Price'] = popular_items['Item Price'].map('${:,.2f}'.format)
popular_items['Total Purchase Value'] = popular_items['Total Purchase Value'].map('${:,.2f}'.format)
# Print top 5 most profitable items
popular_items.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
profitable_items = items_count_df.sort_values(by=['Total Purchase Value'], ascending=False)
# Format columns as currency
profitable_items['Item Price'] = profitable_items['Item Price'].map('${:,.2f}'.format)
profitable_items['Total Purchase Value'] = profitable_items['Total Purchase Value'].map('${:,.2f}'.format)
# Print top 5 most profitable items
profitable_items.head()
# Print three observable trends based on the data
trend1 = 'Trend 1:\nHeroes of Pymoli is more than 5 times as popular for males compared to females. Males also brought in over $1,600 more for the company than females did. Thus, the company should work on some female directed marketing campaigns to help bring more females to buy the game.\n'
trend2 = "Trend 2:\nThe age group Purchase Count and Total Purchase Value data forms a bell curve, because the data increases up to age 20-24, then they both decrease from there as people get older. The company should market more to the younger and older crowds to help bring in more revenue.\n"
trend3 = 'Trend 3:\nThere is an obvious positive relationship between item popularity and item profitability. The items that are most popular amongst players will be the items that are most frequently bought. If there are items they are struggling to sell, they need to work with marketing to get the name out to get people talking about that item more. Once you get the popularity of the item up, it is bound to bring in more purchases.'
print(trend1)
print(trend2)
print(trend3)
###Output
Trend 1:
Heroes of Pymoli is more than 5 times as popular for males compared to females. Males also brought in over $1,600 more for the company than females did. Thus, the company should work on some female directed marketing campaigns to help bring more females to buy the game.
Trend 2:
The age group Purchase Count and Total Purchase Value data forms a bell curve, because the data increases up to age 20-24, then they both decrease from there as people get older. The company should market more to the younger and older crowds to help bring in more revenue.
Trend 3:
There is an obvious positive relationship between item popularity and item profitability. The items that are most popular amongst players will be the items that are most frequently bought. If there are items they are struggling to sell, they need to work with marketing to get the name out to get people talking about that item more. Once you get the popularity of the item up, it is bound to bring in more purchases.
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
unique_df = purchase_data.loc[:,["Gender","SN","Age"]]
unique_df = unique_df.drop_duplicates()
unique_df.head()
#total_players = unique_df.count()["Gender"]
#total_players
total_players = unique_df.count()["Gender"]
pd.DataFrame({'Total Players': [total_players]})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
unique_items = len(purchase_data["Item ID"].unique())
unique_items
average_price = purchase_data["Price"].mean()
average_price
num_purchases = purchase_data["Price"].count()
num_purchases
total_rev = purchase_data["Price"].sum()
total_rev
sum_table = pd.DataFrame({
"Number of Unique Items": unique_items,
"Average Price": [average_price],
"Number of Purchases": [num_purchases],
"Total Revenue": [total_rev]
})
sum_table["Average Price"] = sum_table["Average Price"].map("${:,.2f}".format)
sum_table["Total Revenue"] = sum_table["Total Revenue"].map("${:,.2f}".format)
sum_table
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
purchase_data.head()
unique_df.head()
gender_data = purchase_data[["Gender"]].value_counts()
gender_data
gender_df = unique_df["Gender"].value_counts()
gender_df
gender_series = unique_df["Gender"].value_counts()
type(gender_series) # DO NOT USE THIS METHOD # THIS IS A SERIES
gender_series
gender_df = pd.DataFrame(gender_series)
type(gender_df) # USE THIS METHOD INSTEAD # IT IS A DATAFRAME
gender_df.rename(columns = {'Gender':'Total Count'}, inplace = True)
gender_df
###gender_summary = pd.DataFrame({"Total Count": [gender_df], "Percentage of Players": [percentage_of_players]})
gender_total_series = unique_df["Gender"].value_counts().sum()
gender_total_series
total_players = unique_df.count()["Gender"]
pd.DataFrame({'Total Players': [total_players]})
percent_series = ((gender_series/gender_total_series) * 100).map("{:,.2f}%".format)
percent_series
#percent_df = (((gender_df)/(gender_total_series)) * 100)
#gender_df.rename(columns = {'Total Count':'Percentage of Players'}, inplace = True)
#percent_df
percent_df = pd.DataFrame(percent_series)
percent_df
merge_df = pd.merge(gender_df, percent_df, left_index=True, right_index=True)
merge_df
merge_df.rename(columns = {'Percentage of Players_x':'Total Count',"Gender":"Percentage of Players"}, inplace = True)
merge_df
# Merge two dataframes using an inner join
#merge_df = pd.merge(info_df, items_df, on="customer_id")
#merge_df
# DOES NOT WORK
#gender_summary = pd.DataFrame({"Total Count":[gender_df], "Percentage of Players":[percent]})
#gender_summary.head()
#DOES NOT WORK
#gender_summary = pd.DataFrame({"": ["Male", "Female", "Other / Non-Disclosed"], "Percentage of Players": [Male, Female, Nondisclosed], "Total Count": [Males, Females, Nons]})
#DOES NOT WORK
#gender_df = pd.gender_df({"Total Count": [gender_df], "Percentage of Players": [percentage_of_players]})
#percentage_of_players = ((gender_df)/(gender_total_df)) * 100
#gender_df["Percentage of Players"] = percentage_of_players
#gender_df.head()
#sum_table["Average Price"] = sum_table["Average Price"].map("${:,.2f}".format)
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_analysis_df = pd.DataFrame(purchase_data)
#purchase_analysis_df = purchase_data.loc[:,["Purchase Count","Average Purchase Price","Total Purchase Value","Avg Total Purchase per Person"]]
#purchase_analysis_df = purchase_analysis_df.drop_duplicates()
#purchase_analysis_df.head()
purchase_analysis_df.head()
# Group purchase_data by Gender
gender_stats = purchase_data.groupby("Gender")
# Count Screen Names by Gender
total_count_gender = gender_stats.nunique()["SN"]
total_count_gender.to_frame()
purchase_count = purchase_data["Price"].count()
purchase_count
purchase_count_gender = gender_stats["Purchase ID"].count()
purchase_count_gender
purchase_count_gender.to_frame()
average_purchase_price_gender = purchase_data["Price"].mean()
average_purchase_price_gender
average_purchase_price_gender = gender_stats["Price"].mean()
average_purchase_price_gender
average_purchase_price_gender.to_frame()
total_purchase_value = purchase_data["Price"].sum()
total_purchase_value
total_purchase_value_gender = gender_stats["Price"].sum()
total_purchase_value_gender
total_purchase_value_gender.to_frame()
#Avg Total Purchase per Person
avg_total_per_person = total_purchase_value/total_players
avg_total_per_person
avg_total_per_person_gender = total_purchase_value_gender/total_count_gender
avg_total_per_person_gender
avg_total_per_person_gender.to_frame()
# merge_purchase_analysis = pd.merge(purchase_count_gender, average_purchase_price_gender, left_index=True, right_index=True)
# merge_purchase_analysis
purchase_analysis_df = pd.DataFrame({
"Purchase Count": purchase_count_gender,
"Average Purchase Price": average_purchase_price_gender,
"Total Purchase Value": total_purchase_value_gender,
"Avg Total Purchase per Person": avg_total_per_person_gender
})
purchase_analysis_df
purchase_analysis_df.style.format({"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}",
"Avg Total Purchase per Person":"${:,.2f}"})
# sum_table = pd.DataFrame({
# "Number of Unique Items": unique_items,
# "Average Price": [average_price],
# "Number of Purchases": [num_purchases],
# "Total Revenue": [total_rev]
# })
# sum_table["Average Price"] = sum_table["Average Price"].map("${:,.2f}".format)
# sum_table["Total Revenue"] = sum_table["Total Revenue"].map("${:,.2f}".format)
# sum_table
#gender_data = purchase_data[["Gender"]].counts()
#gender_data
# mean = purchase_data.groupby("Gender")
# mean_df = mean["Price"].mean().to_frame()
# #mean_df.map({"${:,.2f}%"}).format()
# ############# STUCK HERE ##################
# mean_df["Price"].mean_df["Price"].map({"${:,.2f}%"}).format()
# mean_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
age_demo_df = pd.DataFrame(purchase_data)
age_demo_df.head()
age_bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 99999]
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#len(age_bins)
#len(group_names)
#purchase_data["Age Group"] = pd.cut(purchase_data["Age"]) #### NEEDS BINS
purchase_data["Age Group"] = pd.cut(purchase_data["Age"],age_bins)
purchase_data.head()
purchase_data["Age Group"] = pd.cut(purchase_data["Age"],age_bins, labels=group_names)
purchase_data.head()
age_group = purchase_data.groupby("Age Group")
age_group
total_count_age = age_group["SN"].nunique()
total_count_age
total_count_age.to_frame()
percentage_by_age = ((total_count_age/total_players) * 100).map("{:,.2f}%".format)
percentage_by_age
percentage_by_age.to_frame()
age_demographics = pd.DataFrame({"Percentage of Players": percentage_by_age, "Total Count": total_count_age})
age_demographics
#age_demographics.index.name = None
## Formatting and adding 2 decimals and the % sign to the DataFrame
#age_demographics.style.format({"Percentage of Players":"{:,.2f}%"})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_data_age = age_group["Purchase ID"].count()
purchase_data_age
purchase_data_age.to_frame()
avg_purchase_price_age = age_group["Price"].mean()
avg_purchase_price_age
avg_purchase_price_age.to_frame()
total_purchase_value = age_group["Price"].sum()
total_purchase_value
total_purchase_value.to_frame()
avg_purchase_per_person_age = total_purchase_value/total_count_age
avg_purchase_per_person_age
avg_purchase_per_person_age.to_frame()
purchasing_analysis_df = pd.DataFrame({
"Purchase Count": purchase_data_age,
"Average Purchase Price": avg_purchase_price_age,
"Total Purchase Value":total_purchase_value,
"Average Purchase Total per Person": avg_purchase_per_person_age
})
purchasing_analysis_df
purchasing_analysis_df.style.format({"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}",
"Average Purchase Total per Person":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#use groupoby()
spender_data = purchase_data.groupby("SN")
spender_data
purchase_count_data = spender_data["Purchase ID"].count()
purchase_count_data
purchase_count_data.to_frame()
avg_purchase_price_data = spender_data["Price"].mean()
avg_purchase_price_data
avg_purchase_price_data.to_frame()
purchase_total_data = spender_data["Price"].sum()
purchase_total_data
purchase_total_data.to_frame()
top_spenders = pd.DataFrame({
"Purchase Count": purchase_count_data,
"Average Purchase Price": avg_purchase_price_data,
"Total Purchase Value": purchase_total_data
})
top_spenders
descending_order = top_spenders.sort_values(["Total Purchase Value"], ascending=False)
descending_order.head()
descending_order.head().style.format({"Average Purchase Total":"${:,.2f}",
"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
items = purchase_data[["Item ID", "Item Name", "Price"]]
items
item_data = items.groupby(["Item Name","Item ID"])
item_data
purchase_count_item = item_data["Price"].count()
purchase_count_item
purchase_count_item.to_frame()
total_purchase_value = (item_data["Price"].sum())
total_purchase_value
total_purchase_value.to_frame()
item_price = total_purchase_value/purchase_count_item
item_price
item_price.to_frame()
popular_items = pd.DataFrame({
"Purchase Count": purchase_count_item,
"Item Price": item_price,
"Total Purchase Value": total_purchase_value})
popular_items.style.format({
"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
popular_descending_order = popular_items.sort_values(["Purchase Count"], ascending=False).head()
popular_descending_order.head().style.format({
"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
# popular_descending_order = popular_items.sort_values(["Purchase Count"], ascending=False).head()
# popular_descending_order
# popular_items.head().style.format({
# "Item Price":"${:,.2f}",
# "Total Purchase Value":"${:,.2f}"})
# popular_items.head().style.format({
# "Item Price":"${:,.2f}",
# "Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
popular_descending_order = popular_items.sort_values(["Total Purchase Value"], ascending=False).head()
popular_descending_order
popular_descending_order.style.format({
"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import os
import numpy as np
# File to Load (Remember to Change These)
filepath = os.path.join("Resources", "purchase_data.csv")
# Read Purchasing File and store into Pandas data frame
purchase_data_df = pd.read_csv(filepath)
purchase_data_df.tail()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#counts the unique gamertags under "SN"
player_count_df = purchase_data_df["SN"].nunique()
#Creates a dataframe that has the header "Total" displaying the information retrieved from my player count, then visualizes it
total_player_count_df = pd.DataFrame({"Total" : [player_count_df]})
total_player_count_df.head()
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# makes a counter to count each unique item name
item_count_df = purchase_data_df["Item Name"].nunique()
#find the average price of all the items within the CSV
average_price_df = round(purchase_data_df["Price"].mean(), 2)
#counts the number of purchases by counting the item ID
total_purchase_df = purchase_data_df["Item ID"].count()
#Takes all the Prices in the "Price" column and adds them up
total_revenue_df = purchase_data_df["Price"].sum()
#Creates a summary data frame that includes all the variables above
total_unique_items_df = pd.DataFrame({"Number of Unique Items": [item_count_df],
"Average Item Price" : f"${average_price_df}",
"Number of Purchases" : [total_purchase_df],
"Total Revenue" : f"${total_revenue_df}"
})
#prints out the data frame
total_unique_items_df.head()
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Locates Male, Female and Other / Non-Disclosed players respectively
male_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Male"), ["SN"] ]
female_df = purchase_data_df.loc[purchase_data_df["Gender"] == "Female", ["SN"] ]
other_df = purchase_data_df.loc[purchase_data_df["Gender"] == "Other / Non-Disclosed", ["SN"] ]
# Counts the number of unique names using the "SN" column
male_count_df = len(male_df.SN.unique())
female_count_df = len(female_df.SN.unique())
other_count_df = len(other_df.SN.unique())
# Takes the counts above and divides them by the total number of players,
# then multiplies by 100 to get a percentage for each gender
male_percent_df = (male_count_df/player_count_df)*100
female_percent_df = (female_count_df/player_count_df)*100
other_percent_df = (other_count_df/player_count_df)*100
# Creates a data frame that compares the genders by player count and percentage
percent_table_df = pd.DataFrame({
"Gender" : ["Male", "Female", "Other / Non-Disclosed"],
"Total Count" : [(male_count_df), (female_count_df), (other_count_df)],
"Percentage of Players" : [male_percent_df, female_percent_df, other_percent_df]
})
# Formats the series to include a "%" at the end of each value in the column
percent_table_df["Percentage of Players"] = percent_table_df["Percentage of Players"].map("{:.2f}%".format)
# Showcases dataframe
percent_table_df
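# Hedged aside (sketch on toy data; the real "SN" and "Gender" column names are assumed to
# match the dataset above): value_counts(normalize=True) on the de-duplicated players gives
# the same percentage breakdown in two short steps, without filtering each gender separately.
import pandas as pd

toy = pd.DataFrame({
    "SN":     ["a", "a", "b", "c", "d"],
    "Gender": ["Male", "Male", "Female", "Male", "Other / Non-Disclosed"],
})

unique_players = toy.drop_duplicates(subset="SN")
share = unique_players["Gender"].value_counts(normalize=True).mul(100).round(2)
print(share)  # percentage of unique players in each gender group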
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Selects the "Purchase ID" column for rows where Gender is "Male"
male_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Male"), ["Purchase ID"]]
# Gets the length of the above DataFrame (the number of male purchases)
male_purchase_good_df = len(male_purchase_df)
# Selects the "Price" column for rows where Gender is "Male"
male_average_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Male"), ("Price")]
# Takes the mean of the prices selected above
true_male_average_purchase_good_df = male_average_purchase_df.mean()
# Selects the "Price" column again for rows where Gender is "Male"
male_total_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Male"), ("Price")]
# Sums the prices selected above to get the total purchase value
true_male_total_purchase_good_df = male_total_purchase_df.sum()
#true_male_total_purchase_good_df
# Calculates the average amount spent per unique "Male" player
male_average_spent_per_person_df = true_male_total_purchase_good_df/male_count_df
#Repeats steps above for the "Gender" == "Female" category
female_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Female"), ["Purchase ID"]]
female_purchase_good_df = len(female_purchase_df)
female_average_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Female"), ("Price")]
true_female_average_purchase_good_df = female_average_purchase_df.mean()
female_total_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Female"),("Price") ]
true_female_total_purchase_good_df = female_total_purchase_df.sum()
female_average_spent_per_person_df = true_female_total_purchase_good_df/female_count_df
#Repeats steps above for the "Gender" == "Other / Non-Disclosed" category
other_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Other / Non-Disclosed"), ["Purchase ID"]]
other_purchase_good_df = len(other_purchase_df)
other_average_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Other / Non-Disclosed"), ("Price")]
true_other_average_purchase_good_df = other_average_purchase_df.mean()
other_total_purchase_df = purchase_data_df.loc[(purchase_data_df["Gender"] == "Other / Non-Disclosed"),("Price") ]
true_other_total_purchase_good_df = other_total_purchase_df.sum()
other_average_spent_per_person_df = true_other_total_purchase_good_df/other_count_df
# Creates a data frame showcasing "Gender", "Purchase Count", "Average Purchase Price"
#"Total Purchase Value" and"Average Total Purchase Per Person"
gender_purchase_table_df = pd.DataFrame({
"Gender" : ["Male", "Female","Other / Non-Disclosed" ],
"Purchase Count" : [male_purchase_good_df,female_purchase_good_df, other_purchase_good_df],
"Average Purchase Price" : [true_male_average_purchase_good_df, true_female_average_purchase_good_df,
true_other_average_purchase_good_df],
"Total Purchase Value" : [true_male_total_purchase_good_df, true_female_total_purchase_good_df,
true_other_total_purchase_good_df],
"Average Total Purchase Per Person" : [male_average_spent_per_person_df, female_average_spent_per_person_df,
other_average_spent_per_person_df]
})
# Formats the columns that require "$" in front of the values and shows only 2 decimal points
gender_purchase_table_df["Total Purchase Value"] = gender_purchase_table_df["Total Purchase Value" ].map("${:.2f}".format)
gender_purchase_table_df["Average Purchase Price"] = gender_purchase_table_df["Average Purchase Price"].map("${:.2f}".format)
gender_purchase_table_df["Average Total Purchase Per Person"] = gender_purchase_table_df["Average Total Purchase Per Person"].map("${:.2f}".format)
# Showcases the dataframe
gender_purchase_table_df
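# Hedged aside: the per-gender blocks above repeat the same .loc filter for each gender. A
# minimal sketch (toy data; "Gender", "Price" and "SN" column names assumed) of the
# equivalent groupby form that produces all three genders at once:
import pandas as pd

toy = pd.DataFrame({
    "SN":     ["a", "a", "b", "c"],
    "Gender": ["Male", "Male", "Female", "Male"],
    "Price":  [3.00, 1.50, 4.25, 2.00],
})

by_gender = toy.groupby("Gender")
summary = pd.DataFrame({
    "Purchase Count": by_gender["Price"].count(),
    "Average Purchase Price": by_gender["Price"].mean(),
    "Total Purchase Value": by_gender["Price"].sum(),
})
summary["Average Total Purchase Per Person"] = summary["Total Purchase Value"] / by_gender["SN"].nunique()
print(summary)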
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Makes a bin for the different age groups
age_bins = [0, 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.9, 50]
# Makes labels corresponding with the age groups above
age_group = ["<10", "10-14", "15-19", "20-24", "25-29",
"30-34", "35-39", "40+" ]
# Creates another column in the data set for age demographics
purchase_data_df["Age Demographic"] = pd.cut(purchase_data_df["Age"], age_bins, labels =age_group)
# Creates a dataframe that drops repeated names found in the "SN" column
no_dupes_df = purchase_data_df.drop_duplicates(subset=["SN"])
# Groups the de-duplicated data frame above by the bins created in the previous cell
age_demos = no_dupes_df.groupby("Age Demographic")
# Gives a count for each individual bin label using the "Age" column
age_range_df = age_demos["Age"].count()
# Gives the percentage of each age group compared to all players
age_range_demos_df = (age_range_df / player_count_df)*100
# Puts the above information into a dataframe
age_range_and_demo_df = pd.DataFrame({
"Age Count" : age_range_df,
"Age Percentage" : age_range_demos_df})
# Formats the "Age Percentage Column" to include the "%" at the end and only include 2 decimal places
age_range_and_demo_df["Age Percentage"] = age_range_and_demo_df["Age Percentage"].map("{:.2f}%".format)
# Displays the dataframe
age_range_and_demo_df
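# Hedged aside: a tiny sketch showing how pd.cut assigns a handful of ages to the bins used
# above. With the default right=True each interval includes its right edge, so age 9 falls
# into "<10" (the interval (0, 9.9]) and age 10 falls into "10-14" (the interval (9.9, 14.9]).
import pandas as pd

sample_ages = pd.Series([7, 9, 10, 24, 25, 45])
print(pd.cut(sample_ages,
             [0, 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.9, 50],
             labels=["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]))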
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Makes a bin for the different age groups
age_bins = [0, 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.9, 50]
# Makes labels corresponding with the age groups above
age_group = ["<10", "10-14", "15-19", "20-24", "25-29",
"30-34", "35-39", "40+" ]
# Creates another column in the data set for age demographics
purchase_data_df["Age Demographic Purchases"] = pd.cut(purchase_data_df["Age"], age_bins, labels =age_group)
# Make a counter for purchase counts for each of the bins
age_analysis_purchase = purchase_data_df.groupby(["Age Demographic Purchases"]).count()["Purchase ID"]
# Find the average purchase spent for each of the bins
age_analysis_purchase_average = purchase_data_df.groupby(["Age Demographic Purchases"]).mean()["Price"]
# Finds the total amount purchased for each bin
age_analysis_purchase_total_price = purchase_data_df.groupby(["Age Demographic Purchases"]).sum()["Price"]
# Finds the average total spent person for each bin
age_analysis_purchase_average_price = age_analysis_purchase_total_price/age_range_df
# Puts the above findings into a data frame
age_analysis_purchase_df = pd.DataFrame({"Purchase Count" : age_analysis_purchase,
"Average Purchase Price" : age_analysis_purchase_average,
"Total Purchase Value" : age_analysis_purchase_total_price,
"Avg Total Purchase Per Person" : age_analysis_purchase_average_price
})
# Formats the above columns to include a "$" and rounds the values to 2 decimal places
age_analysis_purchase_df["Average Purchase Price"] = age_analysis_purchase_df["Average Purchase Price"].map("${:.2f}".format)
age_analysis_purchase_df["Total Purchase Value"] = age_analysis_purchase_df["Total Purchase Value"].map("${:.2f}".format)
age_analysis_purchase_df ["Avg Total Purchase Per Person"] = age_analysis_purchase_df[ "Avg Total Purchase Per Person"].map("${:.2f}".format)
# Showcases the data frame
age_analysis_purchase_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Groups dataframe by "SN" and counts how many purchases each unique "SN" has made
top_spenders_count = purchase_data_df.groupby(["SN"]).count()["Purchase ID"]
# Finds the average price each of the Top Spenders paid
top_spenders_average_price = purchase_data_df.groupby(["SN"]).mean()["Price"]
# Finds the total amount paid for each Top Spender
top_spenders_total_price = purchase_data_df.groupby(["SN"]).sum()["Price"]
# Puts the above findings into a dataframe
top_spenders_df = pd.DataFrame({"Purchase Count" : top_spenders_count,
"Average Purchase Price" : top_spenders_average_price,
"Total Purchase Value" : top_spenders_total_price
})
# Sorts by total purchase value while the column is still numeric
top_spenders_df = top_spenders_df.sort_values(by="Total Purchase Value", ascending=False)
# Formats the columns to include "$" and show only 2 decimal points
top_spenders_df["Average Purchase Price"] = top_spenders_df["Average Purchase Price"].map("${:.2f}".format)
top_spenders_df["Total Purchase Value"] = top_spenders_df["Total Purchase Value"].map("${:.2f}".format)
# Displays the top 5 spenders in the game
top_spenders_df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Groups the dataframe by "Item ID" and "Item Name", then counts each group
most_popular = purchase_data_df.groupby(["Item ID", "Item Name"])
most_popular.count()
# Creates a new dataframe with the index above and counts the purchase ID
most_popular_df = pd.DataFrame(most_popular["Purchase ID"].count())
#most_popular_df
# Get Total purchase value by Item
most_popular_total = most_popular["Price"].sum()
most_popular_total
most_popular_total_formatted = most_popular_total.map("${:,.2f}".format)
most_popular_total_formatted
# Get purchase price by Item
most_popular_price = most_popular["Price"].mean()
most_popular_price_formatted = most_popular_price.map("${:,.2f}".format)
most_popular_price_formatted
# Add new columns and values into the dataframe
most_popular_df["Item Price"] = most_popular_price_formatted
most_popular_df["Total Purchase Value"] =most_popular_total
#most_popular_df.head()
# Makes a new dataframe, changes the name of "Purchase ID" to "Purchase Count"
most_popular_new_df = most_popular_df.rename(columns={"Purchase ID":"Purchase Count"})
most_popular_new_df
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Sorts the dataframe above from most profitable to least profitable
Top_5_Items_df = most_popular_new_df.sort_values("Total Purchase Value", ascending=False)
Top_5_Items_df.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
unique_names = purchase_data['SN'].unique()
total_players = len(unique_names)
total_players_df = pd.DataFrame({"Total Players": [total_players]})
total_players_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
num_unique_items = len(purchase_data['Item Name'].unique())
total_revenue = purchase_data['Price'].sum()
num_purchases = len(purchase_data['SN'])
avg_price = total_revenue/num_purchases
summary_analysis_df = pd.DataFrame({
"Number of Unique Items": [num_unique_items],
"Average Price": [avg_price],
"Number of Purchases": [num_purchases],
"Total Revenue": [total_revenue]
})
summary_analysis_df["Average Price"] = summary_analysis_df["Average Price"].map("${:.2f}".format)
summary_analysis_df["Total Revenue"] = summary_analysis_df["Total Revenue"].map("${:,.2f}".format)
summary_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Non Pandas solution (or using Pandas minimally)
# total_male_players = len(purchase_data['SN'].loc[purchase_data['Gender'] == 'Male'].unique())
# total_female_players = len(purchase_data['SN'].loc[purchase_data['Gender'] == 'Female'].unique())
# total_other_players = len(purchase_data['SN'].loc[purchase_data['Gender'] == 'Other / Non-Disclosed'].unique())
# perc_male_players = (total_male_players *100) / total_players
# perc_female_players = (total_female_players *100) / total_players
# perc_other_players = (total_other_players *100) / total_players
# gender_demographics_df = pd.DataFrame({
# "Total Count": [total_male_players, total_female_players, total_other_players],
# "Percentage of Players": [perc_male_players, perc_female_players, perc_other_players]
# })
# gender_demographics_df["Percentage of Players"] = \
# gender_demographics_df["Percentage of Players"].map("{:.2f}%".format)
# gender_demographics_df
#Using Pandas
# Reduce the data so that we only have screen name and gender
gender_df = purchase_data[["SN", "Gender"]]
gender_unique_df = gender_df.drop_duplicates()
gender_unique_df.reset_index(drop=True)
# Count genders within the purchase data
gender_counts = gender_unique_df["Gender"].value_counts()
gender_percentages = (gender_counts * 100) /total_players
gender_demographics_df = pd.DataFrame({
"Total Count": gender_counts,
"Percentage of Players": gender_percentages,
})
gender_demographics_df["Percentage of Players"] = \
gender_demographics_df["Percentage of Players"].map("{:.2f}%".format)
gender_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Solution inspired by:
# https://stackoverflow.com/questions/39922986/pandas-group-by-and-sum
gender_analysis_df = purchase_data.groupby('Gender')
purchase_count = gender_analysis_df['Item Name'].count()
average_purchase_price = gender_analysis_df['Price'].sum() / gender_analysis_df['Item Name'].count()
total_purchase_value = gender_analysis_df['Price'].sum()
#Solution inspired by:
# https://stackoverflow.com/questions/38309729/count-unique-values-with-pandas-per-groups/38309823
avg_total_purchase_per_person = gender_analysis_df['Price'].sum() / gender_analysis_df['SN'].nunique()
# print(purchase_count)
# print(average_purchase_price)
# print(total_purchase_value)
# print(avg_total_purchase_per_person)
gender_purchasing_analysis_df = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price": average_purchase_price,
"Total Purchase Value": total_purchase_value,
"Avg Total Purchase Per Person": avg_total_purchase_per_person
})
gender_purchasing_analysis_df["Average Purchase Price"] = \
gender_purchasing_analysis_df["Average Purchase Price"].map("${:.2f}".format)
gender_purchasing_analysis_df["Total Purchase Value"] = \
gender_purchasing_analysis_df["Total Purchase Value"].map("${:,.2f}".format)
gender_purchasing_analysis_df["Avg Total Purchase Per Person"] = \
gender_purchasing_analysis_df["Avg Total Purchase Per Person"].map("${:.2f}".format)
gender_purchasing_analysis_df
###Output
Gender
Female 113
Male 652
Other / Non-Disclosed 15
Name: Item Name, dtype: int64
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
age_demographics_df = purchase_data.copy()
bins = [0, 10, 15, 20, 25, 30, 35, 40, 150]
labels = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
age_demographics_df["Age Group"] = pd.cut(age_demographics_df["Age"],
bins,
labels=labels,
include_lowest=True,
right= False
)
age_demographics_unique_df = age_demographics_df[['SN', 'Age', 'Age Group']].drop_duplicates()
age_demographics_org_df = age_demographics_unique_df.groupby('Age Group')
total_count = age_demographics_org_df['SN'].count()
total_percentages = (total_count * 100) /total_players
total_percentages
age_demographics_summary_df = pd.DataFrame({
"Total Count": total_count,
"Percentage of Players": total_percentages,
})
age_demographics_summary_df["Percentage of Players"] = \
age_demographics_summary_df["Percentage of Players"].map("{:.2f}%".format)
age_demographics_summary_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_data.head()
age_demographics_df = purchase_data.copy()
bins = [0, 10, 15, 20, 25, 30, 35, 40, 150]
labels = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
age_demographics_df["Age Group"] = pd.cut(age_demographics_df["Age"],
bins,
labels=labels,
include_lowest=True,
right= False
)
age_demographics_df
age_demographics_grouped_df = age_demographics_df.groupby('Age Group')
age_demographics_grouped_df.count()
purchase_count = age_demographics_grouped_df['Item Name'].count()
average_purchase_price = age_demographics_grouped_df['Price'].sum() / age_demographics_grouped_df['Item Name'].count()
total_purchase_value = age_demographics_grouped_df['Price'].sum()
# #Solution inspired by:
# # https://stackoverflow.com/questions/38309729/count-unique-values-with-pandas-per-groups/38309823
avg_total_purchase_per_person = age_demographics_grouped_df['Price'].sum() / age_demographics_grouped_df['SN'].nunique()
# print(purchase_count)
# print(average_purchase_price)
# print(total_purchase_value)
# print(avg_total_purchase_per_person)
age_purchasing_analysis_df = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price": average_purchase_price,
"Total Purchase Value": total_purchase_value,
"Avg Total Purchase Per Person": avg_total_purchase_per_person
})
age_purchasing_analysis_df["Average Purchase Price"] = \
age_purchasing_analysis_df["Average Purchase Price"].map("${:.2f}".format)
age_purchasing_analysis_df["Total Purchase Value"] = \
age_purchasing_analysis_df["Total Purchase Value"].map("${:,.2f}".format)
age_purchasing_analysis_df["Avg Total Purchase Per Person"] = \
age_purchasing_analysis_df["Avg Total Purchase Per Person"].map("${:.2f}".format)
age_purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
top_spenders_df = purchase_data.copy()
top_spenders_df
top_spenders_grouped_df = top_spenders_df.groupby('SN')
purchase_count = top_spenders_grouped_df['Item Name'].count()
average_purchase_price = top_spenders_grouped_df['Price'].sum() / top_spenders_grouped_df['Item Name'].count()
total_purchase_value = top_spenders_grouped_df['Price'].sum()
# print(purchase_count)
# print(average_purchase_price)
# print(total_purchase_value)
top_spenders_analysis_df = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price": average_purchase_price,
"Total Purchase Value": total_purchase_value
})
#Sort before you clean up formatting
top_spenders_analysis_df = top_spenders_analysis_df.sort_values(by=['Total Purchase Value'], ascending = False)
top_spenders_analysis_df["Average Purchase Price"] = \
top_spenders_analysis_df["Average Purchase Price"].map("${:.2f}".format)
top_spenders_analysis_df["Total Purchase Value"] = \
top_spenders_analysis_df["Total Purchase Value"].map("${:,.2f}".format)
top_spenders_analysis_df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
most_popular_items_df = purchase_data.copy()
most_popular_items_df['Item Name'].unique()
most_popular_items_df
most_popular_items_grouped_df = most_popular_items_df.groupby('Item ID')
most_popular_items_grouped_df = most_popular_items_df.groupby(['Item ID', 'Item Name'])
purchase_count = most_popular_items_grouped_df['Item Name'].count()
average_purchase_price = most_popular_items_grouped_df['Price'].sum() / most_popular_items_grouped_df['Item Name'].count()
total_purchase_value = most_popular_items_grouped_df['Price'].sum()
# print(purchase_count)
# print(average_purchase_price)
# print(total_purchase_value)
most_popular_items_analysis_df = pd.DataFrame({
"Purchase Count": purchase_count,
"Average Purchase Price": average_purchase_price,
"Total Purchase Value": total_purchase_value
})
#Sort before you clean up formatting
most_popular_items_analysis_cleaned_df = most_popular_items_analysis_df.sort_values(by=['Purchase Count'], ascending = False)
most_popular_items_analysis_cleaned_df["Average Purchase Price"] = \
most_popular_items_analysis_cleaned_df["Average Purchase Price"].map("${:.2f}".format)
most_popular_items_analysis_cleaned_df["Total Purchase Value"] = \
most_popular_items_analysis_cleaned_df["Total Purchase Value"].map("${:,.2f}".format)
most_popular_items_analysis_cleaned_df.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
most_popular_items_analysis_df.head()
most_profitable_items_analysis_cleaned_df = most_popular_items_analysis_df.sort_values(by=['Total Purchase Value'], ascending = False)
most_profitable_items_analysis_cleaned_df["Average Purchase Price"] = \
most_profitable_items_analysis_cleaned_df["Average Purchase Price"].map("${:.2f}".format)
most_profitable_items_analysis_cleaned_df["Total Purchase Value"] = \
most_profitable_items_analysis_cleaned_df["Total Purchase Value"].map("${:,.2f}".format)
most_profitable_items_analysis_cleaned_df.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# The essential code 'purchase_data["SN"].nunique()' counts the number of unique
# observations in the 'SN' column to find the total number of players
# The DataFrame with the dictionary is only there to display it like the output in
# the starter ipynb-file
pd.DataFrame({"Total Players" : [purchase_data["SN"].nunique()]})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Number of Unique Items is nunique() on 'Item ID'
#nunique() on 'Item Name' gives a different result
item_count = purchase_data["Item ID"].nunique()
#Average Price is mean() of 'Price' column
av_price = purchase_data["Price"].mean()
#Number of Purchases is number of rows
purchase_num = purchase_data.shape[0]
#Total Revenue is sum of 'Price' column
total_rev = purchase_data["Price"].sum()
#Assembling new dataframe
purchasing_analysis_total_df = pd.DataFrame({"Number of Unique Items" : [item_count],
"Average Price" : [av_price],
"Number of Purchases" : [purchase_num],
"Total Revenue" : [total_rev]})
#Optional formatting
purchasing_analysis_total_df["Average Price"] = purchasing_analysis_total_df["Average Price"].map('${:.2f}'.format)
purchasing_analysis_total_df["Total Revenue"] = purchasing_analysis_total_df["Total Revenue"].map('${:,.2f}'.format)
#Display
purchasing_analysis_total_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Just a list of index/row names based on the order in the instructions
row_order = ["Male", "Female", "Other / Non-Disclosed"]
#Grouping by "Gender" and getting number of unique counts based on "SN" Column
#and converting that to dataframe and reindexing rows based on "row_order"
gender_counts = purchase_data.groupby(["Gender"])["SN"].nunique().to_frame().reindex(row_order)
#Deleting the name "Gender" of the index column
gender_counts.index.name = None
#Renaming the "SN" column to "Total Counts"
gender_counts.rename(columns={"SN" : "Total Count"}, inplace = True)
#Creating to column "Percentage of Players"
gender_counts["Percentage of Players"] = 100 * gender_counts["Total Count"] / gender_counts["Total Count"].sum()
#Reformatting numbers in percent column
gender_counts["Percentage of Players"] = gender_counts["Percentage of Players"].map("{:.2f}%".format)
#Displaying
gender_counts
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Grouped by "Gender"
gender_grouped = purchase_data.groupby(["Gender"])
#Number of purchases of each gender group is just the row-count
#of each gender-group, then converted to dataframe and named
gender_df = gender_grouped["Purchase ID"].count().to_frame()
#Just renaming the first and only column of the new dataframe to "Purchase Count"
gender_df.rename(columns={"Purchase ID" : "Purchase Count"}, inplace = True)
#Average Purchase Price is the mean() of the "Price" column for each gender-group
gender_df["Average Purchase Price"] = gender_grouped["Price"].mean()
#Total Purchase Value is the sum() of the "Price" column for each gender-group
gender_df["Total Purchase Value"] = gender_grouped["Price"].sum()
#Avg Total Purchase per Person is the "Total Purchase Value" column divided element-wise by
#the number of unique players counted via "SN"
gender_df["Avg Total Purchase per Person"] = gender_df["Total Purchase Value"] / gender_grouped["SN"].nunique()
#Formating last three columns
gender_df["Average Purchase Price"] = gender_df["Average Purchase Price"].map("${:.2f}".format)
gender_df["Total Purchase Value"] = gender_df["Total Purchase Value"].map("${:,.2f}".format)
gender_df["Avg Total Purchase per Person"] = gender_df["Avg Total Purchase per Person"].map("${:.2f}".format)
#Display
gender_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#Bins for age and group names
bins = [0,9,14,19,24,29,34,39,100]
bin_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#Slicing the data according to the bins and category/group names and adding it
#as new column "Age Range" to original dataframe
purchase_data["Age Range"] = pd.cut(purchase_data["Age"], bins, labels = bin_names)
#Grouping by "Age Range" and getting counts of unique values based on "SN" column
age_counts = purchase_data.groupby(["Age Range"])["SN"].nunique().to_frame()
#Deleting the name "Age Range" of the index column
age_counts.index.name = None
#Renaming the "SN" column to "Total Counts"
age_counts.rename(columns={"SN" : "Total Count"}, inplace = True)
#Calculating the corresponding percentages
age_counts["Percentage of Players"] = 100 * age_counts["Total Count"] / age_counts["Total Count"].sum()
#Formatting percents
age_counts["Percentage of Players"] = age_counts["Percentage of Players"].map("{:.2f}%".format)
#Display
age_counts
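# Hedged aside: different cells in this file bin the ages in two slightly different ways,
# either integer edges with the default right-inclusive intervals (as above) or shifted edges
# with right=False. A minimal sketch checking that, for whole-number ages, both choices put
# every age in the same group:
import pandas as pd

ages = pd.Series([9, 10, 14, 15, 24, 25])
labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]

right_closed = pd.cut(ages, [0, 9, 14, 19, 24, 29, 34, 39, 100], labels=labels)
left_closed = pd.cut(ages, [0, 10, 15, 20, 25, 30, 35, 40, 100], labels=labels,
                     right=False, include_lowest=True)
print((right_closed == left_closed).all())  # True: the partitions agree on integer ages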
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Grouped by "Age Range"
age_grouped = purchase_data.groupby(["Age Range"])
#Number of purchases of each age group is just the row-count
#of each age-group, then converted to dataframe and named
age_df = age_grouped["Purchase ID"].count().to_frame()
#Just renaming the first and only column of the new dataframe to "Purchase Count"
age_df.rename(columns={"Purchase ID" : "Purchase Count"}, inplace = True)
#Average Purchase Price is the mean() of the "Price" column for each age-group
age_df["Average Purchase Price"] = age_grouped["Price"].mean()
#Total Purchase Value is the sum() of the "Price" column for each age-group
age_df["Total Purchase Value"] = age_grouped["Price"].sum()
#Avg Total Purchase per Person is the "Total Purchase Value" column divided element-wise by
#the number of unique players counted via "SN"
age_df["Avg Total Purchase per Person"] = age_df["Total Purchase Value"] / age_grouped["SN"].nunique()
#Formatting last three columns
age_df["Average Purchase Price"] = age_df["Average Purchase Price"].map("${:.2f}".format)
age_df["Total Purchase Value"] = age_df["Total Purchase Value"].map("${:,.2f}".format)
age_df["Avg Total Purchase per Person"] = age_df["Avg Total Purchase per Person"].map("${:.2f}".format)
#Display
age_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Grouped by "SN"
sn_grouped = purchase_data.groupby(["SN"])
#Number of purchases of each SN group is just the row-count
#of each SN-group, then converted to dataframe and named
sn_df = sn_grouped["Purchase ID"].count().to_frame()
#Just renaming the first and only column of the new dataframe to "Purchase Count"
sn_df.rename(columns={"Purchase ID" : "Purchase Count"}, inplace = True)
#Average Purchase Price is the mean() of the "Price" column for each SN-group
sn_df["Average Purchase Price"] = sn_grouped["Price"].mean()
#Total Purchase Value is the sum() of the "Price" column for each SN-group
sn_df["Total Purchase Value"] = sn_grouped["Price"].sum()
#Sorting on total purchase value
sn_df.sort_values("Total Purchase Value", ascending=False, inplace=True)
#Formatting
sn_df["Average Purchase Price"] = sn_df["Average Purchase Price"].map("${:.2f}".format)
sn_df["Total Purchase Value"] = sn_df["Total Purchase Value"].map("${:.2f}".format)
#Display
sn_df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Grouped by "Item ID" and "Item Name"
item_grouped = purchase_data.groupby(["Item ID", "Item Name"])
#Number of purchases of each group is just the row-count
#of each group, then converted to dataframe and named
item_df = item_grouped["Purchase ID"].count().to_frame()
#Just renaming the first and only column of the new dataframe to "Purchase Count"
item_df.rename(columns={"Purchase ID" : "Purchase Count"}, inplace = True)
#Item Price is same as mean in this case
item_df["Item Price"] = item_grouped["Price"].mean()
#multiplying item price by purchase count to get total purchase value for each item
item_df["Total Purchase Value"] = item_df["Item Price"] * item_df["Purchase Count"]
#sorting based on purchase counts
item_pc_sort = item_df.sort_values("Purchase Count", ascending=False)
#Formatting
item_pc_sort["Item Price"] = item_pc_sort["Item Price"].map("${:.2f}".format)
item_pc_sort["Total Purchase Value"] = item_pc_sort["Total Purchase Value"].map("${:.2f}".format)
#displaying
item_pc_sort.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Sorting item_df based on Total Purchase Value in descending order
item_tpv_sort = item_df.sort_values("Total Purchase Value", ascending=False)
#Formatting
item_tpv_sort["Item Price"] = item_tpv_sort["Item Price"].map("${:.2f}".format)
item_tpv_sort["Total Purchase Value"] = item_tpv_sort["Total Purchase Value"].map("${:.2f}".format)
#Display
item_tpv_sort.head()
###Output
_____no_output_____
###Markdown
Player Count- Display the total number of players
###Code
# Calculate the unique player count
number_players = df['SN'].nunique()
# Place the number of players into a dataframe
pd.DataFrame(number_players, columns = ['Total_Players'], index=[0])
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total)- Run basic calculations to obtain: Number of Unique Items, Average Purchase Price, Total Number of Purchases, Total Revenue- Create a summary data frame to hold the results- Optional: give the displayed data cleaner formatting- Display the summary data frame
###Code
# Check the kind of items that were purchased most
df['Item Name'].value_counts()
# Calculate number of unique items
unique_items = df['Item ID'].nunique()
# Calculate the average price of purchase
avg_price = df['Price'].mean()
# Calculate the number of purchases
total_purchases = df.shape[0]
# Calculate the total revenue
total_revenue = df['Price'].sum()
# change the format of avg_price and total_revenue
mean_format = '${:.2f}'.format(avg_price)
total_format = '${:,.2f}'.format(total_revenue)
# create the purchasing analysis(total) Dataframe
summary_df = pd.DataFrame({'Number of Unique Items': [unique_items],
'Average Price': [mean_format],
'Number of Purchases': [total_purchases],
'Total Revenue': [total_format]})
summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics - Percentage and Count of Male Players- Percentage and Count of Female Players- Percentage and Count of Other / Non-Disclosed
###Code
# Check Gender keys and values
df['Gender'].value_counts()
# Create a copy of the dataframe
df_full = df.copy()
# Drop duplicate values and save the results as a new dataframe
df_unique = df_full.copy()
df_unique = df_unique.drop_duplicates(subset=['SN'])
df_unique.shape
# Calculate the total count of males/females/others without duplicates
male_count = sum(df_unique['Gender'] == 'Male')
female_count = sum(df_unique['Gender'] == 'Female')
other_count = sum(df_unique['Gender']== 'Other / Non-Disclosed')
print(male_count, female_count, other_count)
# Calculate the percent of males/females/others without duplicates
m_percent = male_count/df_unique['SN'].count()
f_percent = female_count/df_unique['SN'].count()
oth_percent = other_count/df_unique['SN'].count()
male_percent = '{:.2%}'.format(m_percent)
female_percent = '{:.2%}'.format(f_percent)
other_percent = '{:.2%}'.format(oth_percent)
print(male_percent, female_percent, other_percent)
# Create a new dataframe with the gender demographics variables
gender_demo = pd.DataFrame({'Total Count': [male_count, female_count, other_count],
'Percentage of Players': [male_percent, female_percent, other_percent]},
index=['Male', 'Female', 'Other / Non-Disclosed'])
gender_demo
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender)- Run basic calculations to obtain purchase count, avg. purchase price, total purchase value, Average Purchase Total per Person by Gender- Create a summary data frame to hold the results- Optional: give the displayed data cleaner formatting- Display the summary data frame
###Code
# group by gender and place the results into a new dataframe
p_count = df_full.groupby(['Gender']).count()['Purchase ID']
avg_purchase_price = df_full.groupby(['Gender']).mean()['Price']
tot_purchase_value = df_full.groupby(['Gender']).sum()['Price']
tot_purchase_person = tot_purchase_value/gender_demo['Total Count']
p_analysis_gender = pd.DataFrame({'Purchase Count': p_count,
'Average Purchase Price': avg_purchase_price,
'Total Purchase Value':tot_purchase_value,
'Avg Total Purchase per Person':tot_purchase_person})
p_analysis_gender['Average Purchase Price'] = p_analysis_gender['Average Purchase Price'].apply('${:.2f}'.format)
p_analysis_gender['Total Purchase Value'] = p_analysis_gender['Total Purchase Value'].apply('${:,.2f}'.format)
p_analysis_gender['Avg Total Purchase per Person'] = p_analysis_gender['Avg Total Purchase per Person'].apply('${:.2f}'.format)
p_analysis_gender
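# Hedged aside (minimal sketch, toy data): selecting the column before aggregating, as in
# groupby("Gender")["Price"].mean(), only reduces the column of interest. Aggregating the
# whole frame first and then picking "Price" asks pandas to reduce every column, which newer
# pandas releases reject when non-numeric columns such as "SN" are present.
import pandas as pd

toy = pd.DataFrame({"Gender": ["Male", "Female", "Male"],
                    "SN": ["a", "b", "c"],
                    "Price": [3.0, 4.5, 1.5]})

avg_price = toy.groupby("Gender")["Price"].mean()  # column first, then aggregate
print(avg_price)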
###Output
_____no_output_____
###Markdown
Age Demographics- The below each broken into bins of 4 years (i.e. <10, 10-14, 15-19, etc.) - Purchase Count - Average Purchase Price - Total Purchase Value - Average Purchase Total per Person by Age Group - Establish bins for ages- Categorize the existing players using the age bins. Hint: use pd.cut()- Calculate the numbers and percentages by age group- Create a summary data frame to hold the results- Optional: round the percentage column to two decimal points- Display Age Demographics Table
###Code
bins = [0, 9, 14, 19, 24, 29, 34, 39, 45]
# np.linspace(0, 45, 9, dtype= int)
labels = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
# Pass the bins and labels into pd.cut to create a new column
# that has range of Age
df_full['Age Range'] = pd.cut(df_full['Age'], bins=bins, labels =labels, include_lowest=True)
# Create a new copy of the dataframe in order to have the 'Age Range' column in our df of unique values
df_unique2 = df_full.copy()
df_unique2 = df_unique2.drop_duplicates(subset=['SN'])
# Use the copy of the dataset to visualize the unique percentage of players according to their age range
age_count = df_unique2.groupby(['Age Range']).count()['SN']
per_players = age_count/df_unique2.shape[0]
age_demo = pd.DataFrame({'Total Count': age_count,
'Percentage of Players':per_players})
age_demo['Percentage of Players'] = age_demo['Percentage of Players'].apply('{:.2%}'.format)
age_demo
# Make Purchasing analysis by age using groupby and passing the results to a new dataframe
age_purchase_count = df_full.groupby('Age Range').count()['SN']
avg_purchase_price = df_full.groupby('Age Range').mean()['Price']
tot_purchase_value = df_full.groupby('Age Range').sum()['Price']
avg_tot_pur_person = tot_purchase_value/age_demo['Total Count']
p_analysis_age = pd.DataFrame({'Purchase Count':age_purchase_count,
'Average Purchase Price':avg_purchase_price,
'Total Purchase Value':tot_purchase_value,
'Avg Total Purchase per Person':avg_tot_pur_person})
p_analysis_age['Average Purchase Price'] = p_analysis_age['Average Purchase Price'].apply('${:.2f}'.format)
p_analysis_age['Total Purchase Value'] = p_analysis_age['Total Purchase Value'].apply('${:,.2f}'.format)
p_analysis_age['Avg Total Purchase per Person'] = p_analysis_age['Avg Total Purchase per Person'].apply('${:.2f}'.format)
p_analysis_age
###Output
_____no_output_____
###Markdown
Top SpendersIdentify the the top 5 spenders in the game by total purchase value, then list (in a table):- SN- Purchase Count- Average Purchase Price- Total Purchase Value
###Code
# Group by 'SN' and save it, then
# perform the corresponding operations on the desired columns, save them as variables and then pass them into
# a new dataframe, then use .sort_values by 'Total Purchase Value' in it and use iloc to just select
# the top 5 records
top_purchase_value = df_full.groupby('SN')
top_sum_price = top_purchase_value['Price'].sum()
mean_price = top_purchase_value['Price'].mean()
purchase_count = top_purchase_value['Purchase ID'].count()
top_sp_df = pd.DataFrame({'Purchase Count':purchase_count,
'Average Purchase Price':mean_price,
'Total Purchase Value':top_sum_price})
top_sp_df = top_sp_df.sort_values(by='Total Purchase Value', ascending=False)
top_sp_df = top_sp_df.iloc[:5,:]
top_sp_df['Average Purchase Price'] = top_sp_df['Average Purchase Price'].apply('${:.2f}'.format)
top_sp_df['Total Purchase Value'] = top_sp_df['Total Purchase Value'].apply('${:.2f}'.format)
top_sp_df
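# Hedged aside (sketch on toy data; the "Total Purchase Value" column name is assumed to
# match the summary built above): DataFrame.nlargest is a one-step alternative to
# sort_values(...).iloc[:5] for picking the top five rows by a numeric column.
import pandas as pd

toy = pd.DataFrame({"Total Purchase Value": [18.96, 3.45, 13.10, 15.45, 13.83, 9.50]},
                   index=["p1", "p2", "p3", "p4", "p5", "p6"])
print(toy.nlargest(5, "Total Purchase Value"))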
###Output
_____no_output_____
###Markdown
Most Popular ItemsIdentify the 5 most popular items by purchase count, then list (in a table):- Item ID- Item Name- Purchase Count- Item Price- Total Purchase Value
###Code
# Group by 'Item ID' and 'Item Name' and save it, then
# perform the corresponding operations on the desired columns, save them as variables and pass them into
# a new dataframe, use .sort_values by 'Purchase Count' in it and use iloc to just select
# the top 5 records
most_popular = df_full.groupby(['Item ID', 'Item Name'])
item_count = most_popular['Item ID'].count()
item_price = most_popular['Price'].mean()
item_value = most_popular['Price'].sum()
most_popular_df = pd.DataFrame({'Purchase Count': item_count,
'Item Price': item_price,
'Total Purchase Value': item_value})
most_popular_df = most_popular_df.sort_values(by='Purchase Count', ascending=False)
most_popular_df = most_popular_df.iloc[:5,:]
most_popular_df['Item Price'] = most_popular_df['Item Price'].apply('${:.2f}'.format)
most_popular_df['Total Purchase Value'] = most_popular_df['Total Purchase Value'].apply('${:.2f}'.format)
most_popular_df
###Output
_____no_output_____
###Markdown
Most Profitable ItemsIdentify the 5 most profitable items by total purchase value, then list (in a table):- Item ID- Item Name- Purchase Count- Item Price- Total Purchase Value
###Code
# Group by 'Item ID' and 'Item Name' and save it, then
# perform the corresponding operations on the desired columns, save them as variables and pass them into
# a new dataframe, use .sort_values by 'Total Purchase Value' in it and use iloc to just select
# the top 5 records
most_profit = df_full.groupby(['Item ID', 'Item Name'])
item_count = most_profit['Item ID'].count()
item_price = most_profit['Price'].mean()
item_value = most_profit['Price'].sum()
most_popular_df = pd.DataFrame({'Purchase Count': item_count,
'Item Price': item_price,
'Total Purchase Value': item_value})
most_popular_df = most_popular_df.sort_values(by='Total Purchase Value', ascending=False)
most_popular_df = most_popular_df.iloc[:5,:]
most_popular_df['Item Price'] = most_popular_df['Item Price'].apply('${:.2f}'.format)
most_popular_df['Total Purchase Value'] = most_popular_df['Total Purchase Value'].apply('${:.2f}'.format)
most_popular_df
df_full['Price'].max()
df_full['Price'].min()
df_full['Price'].median()
df_full['Price'].mean()
df_full['Price'].std()
###Output
_____no_output_____
###Markdown
Heroes Of Pymoli Data Analysis* Of the 1163 active players, the vast majority are male (84%). There also exists, a smaller, but notable proportion of female players (14%).* Our peak age demographic falls between 20-24 (44.8%) with secondary groups falling between 15-19 (18.60%) and 25-29 (13.4%). ----- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# Create total players dataframe to get count
players_df = pd.DataFrame(purchase_data['SN'].unique())
total_players = pd.DataFrame(players_df.count())
# rename the columns
total_players.columns = ['Total Players']
total_players
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#purchase_data.columns
#Get the unique items
unique_items = len(purchase_data['Item ID'].unique())
#get the average price
average_price = purchase_data['Price'].mean()
#get the number of purchases
number_of_purchases = purchase_data['Price'].count()
#calculate total revenue
total_revenue = purchase_data['Price'].sum()
#create a new dataframe for the summary of data
summary_df = pd.DataFrame([{'Number of Unique Items': unique_items
, 'Average Price': average_price
, 'Number of Purchases': number_of_purchases
, 'Total Revenue' : total_revenue}])
#map format for currency
summary_df['Average Price'] = summary_df['Average Price'].map("${:,.2f}".format)
summary_df['Total Revenue'] = summary_df['Total Revenue'].map("${:,.2f}".format)
summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Create a new data frame of Gender data to drop duplicates
clean_gender_data_df = purchase_data[['SN', 'Gender']].drop_duplicates()
# Create a groupby
gender_df = clean_gender_data_df.groupby(['Gender'])
gender_count_df = gender_df.count()
gender_count_df.rename(columns={'SN': 'Total Count'}, inplace=True)
# Create new column for percentages
total_number_players = total_players.iloc[0,0]
gender_count_df['Percentage of Players'] = (gender_count_df['Total Count'] / total_number_players * 100).round(2).map("{:,.2f}%".format)
#print(percentage.head())
# Removes the Gender label from the index in the output
gender_count_df.index.name = None
#gender_count_df['Percentage of Players'] = gender_count_df['Percentage of Players'].map("{:,.2f}%".format)
sorted_df = gender_count_df.sort_values('Total Count', ascending=False)
sorted_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#copy over gender and price data
clean_gender_purchase_data_df = purchase_data[['Gender', 'Price']]
# Get counts by gender
gender_purchase_summary_df = clean_gender_purchase_data_df.groupby('Gender').count()
# Capture the total purchase value per gender (sum of prices, not of counts)
gender_purchase_summary_df['Total Purchases Value'] = clean_gender_purchase_data_df.groupby('Gender')['Price'].sum()
# Calculate the average total purchase per person by dividing each gender's total value by its player count
gender_purchase_summary_df['Avg Total Purchase per Person'] = gender_purchase_summary_df['Total Purchases Value'] / gender_count_df['Total Count']
# Rename Price to Purchase Count
gender_purchase_summary_df.rename(columns={'Price': 'Purchase Count'}, inplace=True)
# do currency formats using map
gender_purchase_summary_df['Total Purchases Value'] = gender_purchase_summary_df['Total Purchases Value'].map("${:,.2f}".format)
gender_purchase_summary_df['Avg Total Purchase per Person'] = gender_purchase_summary_df['Avg Total Purchase per Person'].map("${:,.2f}".format)
gender_purchase_summary_df
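# Hedged aside (minimal sketch, toy data): a quick sanity check when building per-gender
# totals is that the values from groupby("Gender")["Price"].sum() add back up to the overall
# sum of the Price column.
import pandas as pd

toy = pd.DataFrame({"Gender": ["Male", "Female", "Male"], "Price": [3.0, 4.5, 1.5]})
per_gender_total = toy.groupby("Gender")["Price"].sum()
assert per_gender_total.sum() == toy["Price"].sum()
print(per_gender_total)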
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# scrub data to remove duplicate players
age_data_df = purchase_data.copy()[['SN', 'Age']].drop_duplicates()
#Create bins for the groups
bins = [0, 9, 14, 19, 24, 29, 34, 39, 120]
group_labels = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
#Create a new column Age Group
age_data_df['Age Group'] = pd.cut(age_data_df['Age'], bins, labels=group_labels)
#Get the counts by age group
age_groupby_df = age_data_df.groupby('Age Group').count()
#Calculate the Percentage of Players within an age group
age_groupby_df['Percentage of Players'] = (age_groupby_df['Age'] / total_number_players * 100).round(2).map("{:,.2f}%".format)
#drop the Age Group index name
age_groupby_df.index.name = None
#drop SN column
age_groupby_df.drop(columns=['SN'], inplace=True)
#rename Age to Total Count
age_groupby_df.rename(columns={'Age': 'Total Count'}, inplace=True)
age_groupby_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# create Age and Price DF
age_purchase_df = purchase_data.copy()
#print(age_purchase_df.count())
#Create a new column Age Group - based on bins and labels created previously
age_purchase_df['Age Ranges'] = pd.cut(age_purchase_df['Age'], bins, labels=group_labels)
#Get the counts by age group
age_purchases_groupby_df = pd.DataFrame(age_purchase_df[['Age', 'Age Ranges']]).groupby('Age Ranges').count()
#Get the mean by age group
age_purchases_groupby_df['Average Purchase Prices'] = pd.DataFrame(age_purchase_df[['Price', 'Age Ranges']]).groupby('Age Ranges').mean()
#Get the total by age group
age_purchases_groupby_df['Total Purchase Value'] = pd.DataFrame(age_purchase_df[['Price', 'Age Ranges']]).groupby('Age Ranges').sum()
# Get the average by person in the age group. Use the total count in above Age Demographics
age_purchases_groupby_df['Avg Total Purchases per Person'] = age_purchases_groupby_df['Total Purchase Value'] / age_groupby_df['Total Count']
# Use map format to set currency format for float columns
age_purchases_groupby_df['Average Purchase Prices'] = age_purchases_groupby_df['Average Purchase Prices'].map("${:,.2f}".format)
age_purchases_groupby_df['Total Purchase Value'] = age_purchases_groupby_df['Total Purchase Value'].map("${:,.2f}".format)
age_purchases_groupby_df['Avg Total Purchases per Person'] = age_purchases_groupby_df['Avg Total Purchases per Person'].map("${:,.2f}".format)
age_purchases_groupby_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Get the purchases of each user
user_purchases_df = purchase_data.copy()[['SN', 'Price']]
#group by the user
user_purchases_groupby = user_purchases_df.groupby('SN')
# Get the count of the purchases
user_purchases_groupby_df = pd.DataFrame(user_purchases_groupby['Price'].count())
# Get the average price of the purchases
user_purchases_groupby_df['Average Purchase Price'] = pd.DataFrame(user_purchases_groupby['Price'].mean())
# Get the total purchases
user_purchases_groupby_df['Total Purchase Value'] = pd.DataFrame(user_purchases_groupby['Price'].sum())
# Sort by total purchase value and only take the top 5
user_purchases_summary_df = user_purchases_groupby_df.sort_values('Total Purchase Value', ascending=False).head(5)
# Rename Price to purchase count
user_purchases_summary_df.rename(columns={'Price' : 'Purchase Count'}, inplace=True)
#Format for currency display
user_purchases_summary_df['Average Purchase Price'] = user_purchases_summary_df['Average Purchase Price'].map("${:,.2f}".format)
user_purchases_summary_df['Total Purchase Value'] = user_purchases_summary_df['Total Purchase Value'].map("${:,.2f}".format)
user_purchases_summary_df
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Get the purchases of each item
item_purchases_df = purchase_data.copy()[['Item ID', 'Item Name', 'Price']]
item_info_df = item_purchases_df.drop_duplicates()
#group by the Item ID and Item Name
item_purchases_groupby = item_purchases_df.groupby(['Item ID', 'Item Name'])
# Get the count of the purchases
item_purchases_groupby_df = pd.DataFrame(item_purchases_groupby['Price'].count())
# Get the Total Purchase Value
item_purchases_groupby_df['Total Purchase Value'] = pd.DataFrame(item_purchases_groupby['Price'].sum())
item_purchases_groupby_df.head(200)
# Get the Item Price: the merge with item_info_df did not work cleanly, so derive it below as Total Purchase Value / Purchase Count
#item_purchases_merge_df = pd.merge(item_purchases_groupby_df, item_info_df, on=['Item ID', 'Item Name'])
#print( item_purchases_groupby_df['Price'].head())
item_purchases_groupby_df['Item Price'] = item_purchases_groupby_df['Total Purchase Value'] / item_purchases_groupby_df['Price']
#rename the columns
item_purchases_groupby_df.rename(columns={'Price': 'Purchase Count'}, inplace=True)
#reorder the columns (Item ID and Item Name remain in the index from the groupby, so they do not need to be listed)
summary_purchases = item_purchases_groupby_df[['Purchase Count', 'Item Price', 'Total Purchase Value']]
# sort and only take the top 5
sorted_summary_purchases_most_items = summary_purchases.sort_values(['Purchase Count'], ascending=False).head(5)
#Format currencies using map format
sorted_summary_purchases_most_items['Item Price'] = sorted_summary_purchases_most_items['Item Price'].map("${:,.2f}".format)
sorted_summary_purchases_most_items['Total Purchase Value'] = sorted_summary_purchases_most_items['Total Purchase Value'].map("${:,.2f}".format)
sorted_summary_purchases_most_items
###Output
_____no_output_____
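###Markdown
Side note on the Item Price derived above as Total Purchase Value divided by Purchase Count: a grouped mean gives the same number directly. A toy sketch with illustrative items only:
###Code
import pandas as pd

toy = pd.DataFrame({"Item ID": [1, 1, 2],
                    "Item Name": ["Axe", "Axe", "Bow"],
                    "Price": [2.0, 2.0, 3.5]})
g = toy.groupby(["Item ID", "Item Name"])["Price"]
summary = pd.DataFrame({"Purchase Count": g.count(),
                        "Item Price": g.mean(),
                        "Total Purchase Value": g.sum()})
# for each group, mean == sum / count, so either route yields the same Item Price column
print(summary)
###Output
_____no_output_____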
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# sort and only take the top 5
sorted_summary_purchases_total_value = summary_purchases.sort_values(['Total Purchase Value'], ascending=False).head(5)
sorted_summary_purchases_total_value['Item Price'] = sorted_summary_purchases_total_value['Item Price'].map("${:,.2f}".format)
sorted_summary_purchases_total_value['Total Purchase Value'] = sorted_summary_purchases_total_value['Total Purchase Value'].map("${:,.2f}".format)
sorted_summary_purchases_total_value
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
total_players = purchase_data['SN'].nunique()
total_players
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#pull our metrics into vars
unique_items = purchase_data['Item ID'].nunique()
avg_price = round(purchase_data['Price'].mean(),2)
purchases = purchase_data['Purchase ID'].count()
total_revenue = purchase_data['Price'].sum()
#build df
df_summary = pd.DataFrame({'Number of Unique Items':[unique_items],'Average Price':[avg_price],'Number of Purchases':[purchases],'Total Revenue':[total_revenue]})
#format
df_summary['Average Price'] = df_summary['Average Price'].map("${:,.2f}".format)
df_summary['Total Revenue'] = df_summary['Total Revenue'].map("${:,.2f}".format)
df_summary
###Output
_____no_output_____
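###Markdown
A quick note on the `.map("${:,.2f}".format)` formatting used throughout: it returns strings, so it should always be the last step. A minimal sketch on made-up prices:
###Code
import pandas as pd

prices = pd.Series([3.5, 1234.567])
formatted = prices.map("${:,.2f}".format)   # -> "$3.50", "$1,234.57"
print(formatted)
# the mapped result holds strings (dtype object), so apply this only after all numeric work is done
###Output
_____no_output_____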
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#build a grouped df
df_gender = purchase_data.groupby('Gender')
#get our metrics into variables
gender_count = df_gender['SN'].nunique()
pct_players = gender_count / total_players *100
#construct new df with metrics
df_gender_summary = pd.DataFrame({'Total Count':gender_count
,'Percentage of Players':pct_players}).sort_values(by=['Total Count'],ascending=False)
#apply formatting
df_gender_summary['Percentage of Players'] = df_gender_summary['Percentage of Players'].map("{:.2f}%".format)
df_gender_summary.head()
###Output
_____no_output_____
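###Markdown
The difference between `count()` and `nunique()` matters here: one counts purchase rows, the other counts distinct players. A toy sketch on made-up data:
###Code
import pandas as pd

toy = pd.DataFrame({"Gender": ["Male", "Male", "Female"], "SN": ["a", "a", "b"]})
grouped = toy.groupby("Gender")["SN"]
print(grouped.count())    # rows (purchases) per gender: Female 1, Male 2
print(grouped.nunique())  # distinct players per gender: Female 1, Male 1
###Output
_____no_output_____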
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#get metrics into vars from grouped df
g_purchase_count = df_gender['Purchase ID'].nunique()
g_average_price = round(df_gender['Price'].mean(),2)
g_total_purchase = df_gender['Price'].sum()
g_avg_total_purchase = g_total_purchase / gender_count
#build df from vars
g_purchase_analysis = pd.DataFrame({'Purchase Count':g_purchase_count
,'Average Purchase Price':g_average_price
,'Total Purchase Value':g_total_purchase
,'Avg Total Purchase per Person':g_avg_total_purchase}).sort_values(by=['Purchase Count'],ascending=False)
#apply formatting
g_purchase_analysis['Average Purchase Price'] = g_purchase_analysis['Average Purchase Price'].map("${:,.2f}".format)
g_purchase_analysis['Total Purchase Value'] = g_purchase_analysis['Total Purchase Value'].map("${:,.2f}".format)
g_purchase_analysis['Avg Total Purchase per Person'] = g_purchase_analysis['Avg Total Purchase per Person'].map("${:,.2f}".format)
g_purchase_analysis.head()
###Output
_____no_output_____
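###Markdown
The per-person average above relies on pandas aligning two Series by their index labels when dividing. A minimal sketch with made-up totals:
###Code
import pandas as pd

spend = pd.Series({"Female": 100.0, "Male": 300.0})   # total purchase value per gender
players = pd.Series({"Male": 150, "Female": 80})      # unique players per gender
# Series division aligns on the index labels, not on position
print(spend / players)
###Output
_____no_output_____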
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Create the bins in which Data will be held
bins = [0, 9, 14, 19, 24, 29, 34, 39, 200]
# Create the names for the bins
group_names = ['<10','10-14','15-19','20-24','25-29','30-34','35-39','40+']
df_age = purchase_data
df_age["AgeBin"] = pd.cut(purchase_data['Age'], bins, labels=group_names)
df_age = df_age.groupby('AgeBin')
#get our metrics into vars
age_total_count = df_age['SN'].nunique()
age_percent_players = age_total_count / total_players * 100
#build df
df_age_summary = pd.DataFrame({'Total Count':age_total_count,'Percentage of Players':age_percent_players})
#apply formatting
df_age_summary['Percentage of Players'] = df_age_summary['Percentage of Players'].map("{:.2f}%".format)
df_age_summary.head(15)
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#get metrics into vars from grouped df
age_purchase_count = df_age['Purchase ID'].nunique()
age_average_price = round(df_age['Price'].mean(),2)
age_total_purchase = df_age['Price'].sum()
age_avg_total_purchase = age_total_purchase / age_total_count
#build df from vars
age_purchase_analysis = pd.DataFrame({'Purchase Count':age_purchase_count
,'Average Purchase Price':age_average_price
,'Total Purchase Value':age_total_purchase
,'Avg Total Purchase per Person':age_avg_total_purchase})
#apply formatting
age_purchase_analysis['Average Purchase Price'] = age_purchase_analysis['Average Purchase Price'].map("${:,.2f}".format)
age_purchase_analysis['Total Purchase Value'] = age_purchase_analysis['Total Purchase Value'].map("${:,.2f}".format)
age_purchase_analysis['Avg Total Purchase per Person'] = age_purchase_analysis['Avg Total Purchase per Person'].map("${:,.2f}".format)
age_purchase_analysis.head(15)
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#build a grouped df
purchase_analysis = purchase_data.groupby('SN')
#get metrics into vars from grouped df
purchase_count = purchase_analysis['Purchase ID'].nunique()
average_price = round(purchase_analysis['Price'].mean(),2)
total_purchase = purchase_analysis['Price'].sum()
#build df from vars
purchase_analysis = pd.DataFrame({'Purchase Count':purchase_count
,'Average Purchase Price':average_price
,'Total Purchase Value':total_purchase}).sort_values(by=['Total Purchase Value'],ascending=False)
#apply formatting
purchase_analysis['Average Purchase Price'] = purchase_analysis['Average Purchase Price'].map("${:,.2f}".format)
purchase_analysis['Total Purchase Value'] = purchase_analysis['Total Purchase Value'].map("${:,.2f}".format)
purchase_analysis.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#build a grouped df
item_analysis = purchase_data.groupby(['Item ID','Item Name'])
#get metrics into vars from grouped df
item_purchase_count = item_analysis['Purchase ID'].nunique()
item_price = item_analysis['Price'].mean()
total_purchase = item_analysis['Price'].sum()
#build df from vars
item_analysis = pd.DataFrame({'Purchase Count':item_purchase_count
,'Item Price':item_price
,'Total Purchase Value':total_purchase}).sort_values(by=['Purchase Count'],ascending=False)
#apply formatting
item_analysis['Item Price'] = item_analysis['Item Price'].map("${:,.2f}".format)
item_analysis['Total Purchase Value'] = item_analysis['Total Purchase Value'].map("${:,.2f}".format)
item_analysis.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#build a grouped df
item_analysis = purchase_data.groupby(['Item ID','Item Name'])
#get metrics into vars from grouped df
item_purchase_count = item_analysis['Purchase ID'].nunique()
item_price = item_analysis['Price'].mean()
total_purchase = item_analysis['Price'].sum()
#build df from vars
item_analysis = pd.DataFrame({'Purchase Count':item_purchase_count
,'Item Price':item_price
,'Total Purchase Value':total_purchase}).sort_values(by=['Total Purchase Value'],ascending=False)
#apply formatting
item_analysis['Item Price'] = item_analysis['Item Price'].map("${:,.2f}".format)
item_analysis['Total Purchase Value'] = item_analysis['Total Purchase Value'].map("${:,.2f}".format)
item_analysis.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
demographics = purchase_data.loc[:,['Gender', 'SN', 'Age']].drop_duplicates()
num_players = demographics.count()[0]
player_count = pd.DataFrame({'Total Players' : [num_players]})
player_count
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
items = len(purchase_data['Item ID'].unique())
average_price = purchase_data['Price'].mean()
purchases = purchase_data['Price'].count()
revenue = purchase_data['Price'].sum()
purchase_analysis = pd.DataFrame({'Number of Unique Items' : [items],
'Average Price' : average_price,
'Number of Purchases' : purchases,
'Total Revenue' : revenue})
purchase_analysis["Average Price"] = purchase_analysis["Average Price"].map("${:,.2f}".format)
purchase_analysis["Total Revenue"] = purchase_analysis["Total Revenue"].map("${:,.2f}".format)
purchase_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
gender_count = demographics['Gender'].value_counts()
gender_perc = gender_count/num_players
gender_summary = pd.DataFrame({'Total Count' : gender_count,
'Percentage of Players' : gender_perc})
gender_summary["Percentage of Players"] = gender_summary["Percentage of Players"].map("{:,.2%}".format)
gender_summary
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_count = purchase_data.groupby('Gender').count()['Price']
avg_price = purchase_data.groupby('Gender').mean()['Price']
total_purchase = purchase_data.groupby('Gender').sum()['Price']
avg_tot_purchase = total_purchase/gender_summary['Total Count']
gender_analysis = pd.DataFrame({'Purchase Count' : purchase_count,
'Average Purchase Price' : avg_price,
'Total Purchase Value' : total_purchase,
'Avg Total Purchase per Person' : avg_tot_purchase})
gender_analysis[["Average Purchase Price" , "Total Purchase Value" , "Avg Total Purchase per Person"]] = gender_analysis[["Average Purchase Price" , "Total Purchase Value" , "Avg Total Purchase per Person"]].applymap("${:,.2f}".format)
gender_analysis
###Output
_____no_output_____
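###Markdown
`applymap` (spelled `DataFrame.map` in newer pandas releases) applies the currency formatter to every cell of the selected columns. A toy sketch with illustrative values:
###Code
import pandas as pd

toy = pd.DataFrame({"Average Purchase Price": [3.2, 4.567],
                    "Total Purchase Value": [100.0, 2500.5]})
cols = ["Average Purchase Price", "Total Purchase Value"]
# applymap runs the formatter element-wise over every cell of the selected columns
toy[cols] = toy[cols].applymap("${:,.2f}".format)
print(toy)
###Output
_____no_output_____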
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
bins = [0 , 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.9, 9999]
labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
demographics["Age Ranges"] = pd.cut(demographics["Age"], bins, labels=labels)
tot_demographics = demographics["Age Ranges"].value_counts()
tot_demographics
perc_group = tot_demographics/num_players
age_demographics = pd.DataFrame({'Total Count' : tot_demographics,
'Percentage of Players' : perc_group})
age_demographics = age_demographics.sort_index()
age_demographics["Percentage of Players"] = age_demographics["Percentage of Players"].map("{:,.2%}".format)
age_demographics
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], bins, labels = labels)
age_purchase_count = purchase_data.groupby("Age Ranges").count()['Price']
age_purchase_total = purchase_data.groupby("Age Ranges").sum()['Price']
age_purchase_ave = purchase_data.groupby("Age Ranges").mean()['Price']
age_purchase_tot_person = age_purchase_total/age_demographics['Total Count']
age_purchase_tot_person
age_purchase_analysis = pd.DataFrame({'Purchase Count' : age_purchase_count,
'Average Purchase Price' : age_purchase_ave,
'Total Purchase Value' : age_purchase_total,
'Avg Total Purchase per Person' : age_purchase_tot_person})
age_purchase_analysis[["Average Purchase Price", "Total Purchase Value", "Avg Total Purchase per Person"]] = age_purchase_analysis[["Average Purchase Price", "Total Purchase Value", "Avg Total Purchase per Person"]].applymap("${:,.2f}".format)
age_purchase_analysis
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
top_purchase_count = purchase_data.groupby('SN').count()['Price']
top_avg_price = purchase_data.groupby('SN').mean()['Price']
top_total_value = purchase_data.groupby('SN').sum()['Price']
top_analysis = pd.DataFrame({'Purchase Count' : top_purchase_count,
'Average Purchase Price' : top_avg_price,
'Total Purchase Value' : top_total_value,
})
top_analysis = top_analysis.sort_values(by = "Total Purchase Value" , ascending=False)
top_analysis[["Average Purchase Price" , "Total Purchase Value"]] = top_analysis[["Average Purchase Price" , "Total Purchase Value"]].applymap("${:,.2f}".format)
top_analysis.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
pop_count = purchase_data.groupby(['Item ID' , 'Item Name']).count()['Price']
pop_avg_iPrice = purchase_data.groupby(['Item ID' , 'Item Name']).mean()['Price']
pop_tot_value = purchase_data.groupby(['Item ID' , 'Item Name']).sum()['Price']
pop_analysis = pd.DataFrame({'Purchase Count' : pop_count,
'Item Price' : pop_avg_iPrice,
'Total Purchase Value' : pop_tot_value,
})
pop_analysis_formatted = pop_analysis.sort_values(by = "Purchase Count" , ascending=False)
pop_analysis_formatted[["Item Price" , "Total Purchase Value"]] = pop_analysis_formatted[["Item Price" , "Total Purchase Value"]].applymap("${:,.2f}".format)
pop_analysis_formatted.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
most_profit = pop_analysis.sort_values(by = "Total Purchase Value" , ascending=False)
most_profit[['Item Price' , 'Total Purchase Value']] = most_profit[['Item Price' , 'Total Purchase Value']].applymap("${:,.2f}".format)
most_profit.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# Calculate the Number of Unique Players
player_demographics = purchase_data.loc[:, ["Gender", "SN", "Age"]]
player_demographics = player_demographics.drop_duplicates()
num_players = player_demographics.count()[0]
# Display the total number of players
pd.DataFrame({"Total Players": [num_players]})
###Output
_____no_output_____
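###Markdown
`drop_duplicates` over the identifying columns is what turns purchase rows into one row per player; `nunique` on the screen-name column is an equivalent shortcut. A toy sketch with made-up players:
###Code
import pandas as pd

toy = pd.DataFrame({"SN": ["a", "a", "b"], "Gender": ["M", "M", "F"], "Age": [20, 20, 30]})
unique_players = toy[["SN", "Gender", "Age"]].drop_duplicates()
print(len(unique_players))     # 2 players
print(toy["SN"].nunique())     # equivalent head count straight from the SN column
###Output
_____no_output_____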
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Run basic calculations
average_item_price = purchase_data["Price"].mean()
total_purchase_value = purchase_data["Price"].sum()
purchase_count = purchase_data["Price"].count()
item_count = len(purchase_data["Item ID"].unique())
# Create a DataFrame to hold results
summary_table = pd.DataFrame({"Number of Unique Items": item_count,
"Total Revenue": [total_purchase_value],
"Number of Purchases": [purchase_count],
"Average Price": [average_item_price]})
# Minor Data Munging
summary_table = summary_table.round(2)
summary_table ["Average Price"] = summary_table["Average Price"].map("${:,.2f}".format)
summary_table ["Number of Purchases"] = summary_table["Number of Purchases"].map("{:,}".format)
summary_table ["Total Revenue"] = summary_table["Total Revenue"].map("${:,.2f}".format)
summary_table = summary_table.loc[:,["Number of Unique Items", "Average Price", "Number of Purchases", "Total Revenue"]]
# Display the summary_table
summary_table
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Calculate the Number and Percentage by Gender
gender_demographics_totals = player_demographics["Gender"].value_counts()
gender_demographics_percents = gender_demographics_totals / num_players
gender_demographics = pd.DataFrame({"Total Count": gender_demographics_totals, "Percentage of Players": gender_demographics_percents})
# Data Munging
gender_demographics['Percentage of Players'] = gender_demographics['Percentage of Players'].map("{:,.2%}".format)
gender_demographics
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Run basic calculations
gender_purchase_total = purchase_data.groupby(["Gender"]).sum()["Price"].rename("Total Purchase Value")
gender_average = purchase_data.groupby(["Gender"]).mean()["Price"].rename("Average Purchase Price")
gender_counts = purchase_data.groupby(["Gender"]).count()["Price"].rename("Purchase Count")
# Calculate Normalized Purchasing (Average Total Purchase per Person)
normalized_total = gender_purchase_total / gender_demographics["Total Count"]
# Convert to DataFrame
gender_data = pd.DataFrame({"Purchase Count": gender_counts, "Average Purchase Price": gender_average, "Total Purchase Value": gender_purchase_total, "Normalized Totals": normalized_total})
# Minor Data Munging
gender_data["Average Purchase Price"] = gender_data["Average Purchase Price"].map("${:,.2f}".format)
gender_data["Total Purchase Value"] = gender_data["Total Purchase Value"].map("${:,.2f}".format)
gender_data ["Purchase Count"] = gender_data["Purchase Count"].map("{:,}".format)
gender_data["Avg Total Purchase per Person"] = gender_data["Normalized Totals"].map("${:,.2f}".format)
gender_data = gender_data.loc[:, ["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Avg Total Purchase per Person"]]
# Display the Gender Table
gender_data
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Establish the bins
age_bins = [0, 9.90, 14.90, 19.90, 24.90, 29.90, 34.90, 39.90, 99999]
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Categorize the existing players using the age bins
player_demographics["Age Ranges"] = pd.cut(player_demographics["Age"], age_bins, labels=group_names)
# Calculate the Numbers and Percentages by Age Group
age_demographics_totals = player_demographics["Age Ranges"].value_counts()
age_demographics_percents = age_demographics_totals / num_players
age_demographics = pd.DataFrame({"Total Count": age_demographics_totals, "Percentage of Players": age_demographics_percents})
# Minor Data Munging
age_demographics['Percentage of Players'] = age_demographics['Percentage of Players'].map("{:,.2%}".format)
# Display Age Demographics Table
age_demographics = age_demographics.sort_index()
age_demographics
###Output
_____no_output_____
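###Markdown
`value_counts` orders bins by frequency, so the `sort_index()` call above is what puts the age ranges back in their natural order. A minimal sketch on made-up ages:
###Code
import pandas as pd

ages = pd.Series([22, 23, 8, 35])
binned = pd.cut(ages, [0, 9, 24, 39], labels=["<10", "10-24", "25-39"])
counts = binned.value_counts()   # ordered by frequency, not by age range
print(counts.sort_index())       # sort_index restores the natural bin order
###Output
_____no_output_____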
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Bin the Purchasing Data
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], age_bins, labels=group_names)
# Run basic calculations
age_purchase_total = purchase_data.groupby(["Age Ranges"]).sum()["Price"].rename("Total Purchase Value")
age_average = purchase_data.groupby(["Age Ranges"]).mean()["Price"].rename("Average Purchase Price")
age_counts = purchase_data.groupby(["Age Ranges"]).count()["Price"].rename("Purchase Count")
# Calculate Normalized Purchasing (Average Purchase Total per Person)
normalized_total = age_purchase_total / age_demographics["Total Count"]
# Convert to DataFrame
age_data = pd.DataFrame({"Purchase Count": age_counts, "Average Purchase Price": age_average, "Total Purchase Value": age_purchase_total, "Normalized Totals": normalized_total})
# Minor Data Munging
age_data["Average Purchase Price"] = age_data["Average Purchase Price"].map("${:,.2f}".format)
age_data["Total Purchase Value"] = age_data["Total Purchase Value"].map("${:,.2f}".format)
age_data ["Purchase Count"] = age_data["Purchase Count"].map("{:,}".format)
age_data["Avg Total Purchase per Person"] = age_data["Normalized Totals"].map("${:,.2f}".format)
age_data = age_data.loc[:, ["Purchase Count", "Average Purchase Price", "Total Purchase Value", "Avg Total Purchase per Person"]]
# Display the Age Table
age_data
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Basic Calculations
user_total = purchase_data.groupby(["SN"]).sum()["Price"].rename("Total Purchase Value")
user_average = purchase_data.groupby(["SN"]).mean()["Price"].rename("Average Purchase Price")
user_count = purchase_data.groupby(["SN"]).count()["Price"].rename("Purchase Count")
# Convert to DataFrame
user_data = pd.DataFrame({"Total Purchase Value": user_total, "Average Purchase Price": user_average, "Purchase Count": user_count})
# Display Table
user_sorted = user_data.sort_values("Total Purchase Value", ascending=False)
# Minor Data Munging
user_sorted["Average Purchase Price"] = user_sorted["Average Purchase Price"].map("${:,.2f}".format)
user_sorted["Total Purchase Value"] = user_sorted["Total Purchase Value"].map("${:,.2f}".format)
user_sorted = user_sorted.loc[:,["Purchase Count", "Average Purchase Price", "Total Purchase Value"]]
# Display DataFrame
user_sorted.head(5)
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Extract item Data
item_data = purchase_data.loc[:,["Item ID", "Item Name", "Price"]]
# Perform basic calculations first so the grouped Series exist before the DataFrame is built
total_item_purchase = item_data.groupby(["Item ID", "Item Name"]).sum()["Price"].rename("Total Purchase Value")
average_item_purchase = item_data.groupby(["Item ID", "Item Name"]).mean()["Price"]
item_count = item_data.groupby(["Item ID", "Item Name"]).count()["Price"].rename("Purchase Count")
# Create new DataFrame from the grouped results
item_data_pd = pd.DataFrame({"Total Purchase Value": total_item_purchase, "Item Price": average_item_purchase, "Purchase Count": item_count})
# Sort Values
item_data_count_sorted = item_data_pd.sort_values("Purchase Count", ascending=False)
# Minor Data Munging
item_data_count_sorted["Item Price"] = item_data_count_sorted["Item Price"].map("${:,.2f}".format)
item_data_count_sorted["Purchase Count"] = item_data_count_sorted["Purchase Count"].map("{:,}".format)
item_data_count_sorted["Total Purchase Value"] = item_data_count_sorted["Total Purchase Value"].map("${:,.2f}".format)
item_popularity = item_data_count_sorted.loc[:,["Purchase Count", "Item Price", "Total Purchase Value"]]
item_popularity.head(5)
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Item Table (Sorted by Total Purchase Value)
item_total_purchase = item_data_pd.sort_values("Total Purchase Value", ascending=False)
# Minor Data Munging
item_total_purchase["Item Price"] = item_total_purchase["Item Price"].map("${:,.2f}".format)
item_total_purchase["Purchase Count"] = item_total_purchase["Purchase Count"].map("{:,}".format)
item_total_purchase["Total Purchase Value"] = item_total_purchase["Total Purchase Value"].map("${:,.2f}".format)
item_profitable = item_total_purchase.loc[:,["Purchase Count", "Item Price", "Total Purchase Value"]]
item_profitable.head(5)
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Heroes of Pymoli Reporting
# Dependencies and Setup
import pandas as pd
# Upload Purchase Data CSV File
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# value_counts lists each unique screen name, so its length gives the number of unique players
No_Players = len(purchase_data["SN"].value_counts())
# Place all of the data found into a summary DataFrame
No_Players_Summary = pd.DataFrame({"Total Players": [No_Players ]})
No_Players_Summary
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Calculate the number of unique items and Purchases in the DataFrame
No_Uniq_Items = len(purchase_data["Item ID"].unique())
Av_Price = purchase_data["Price"].mean()
No_Purchases = len(purchase_data["Purchase ID"].unique())
Tot_Revenue = purchase_data["Price"].sum()
# Place all of the data found into a summary DataFrame
purchasing_analysis = pd.DataFrame({"Number of Unique Items": [No_Uniq_Items],
"Average Price": [Av_Price],
"Number of Purchases": [No_Purchases],
"Total Revenue": [Tot_Revenue]})
purchasing_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Count unique players by Gender (grouping on SN; counting rows would count purchases, not players)
gender_group = pd.DataFrame(purchase_data.groupby("Gender")["SN"].nunique())
#Reset the Index
gender_group.reset_index(inplace=True)
#Assign name for the column variables
gender_group.columns = ["Gender","#Gender"]
# Calculate Gender% as a share of all unique players
perc = (purchase_data.groupby("Gender")["SN"].nunique()/No_Players)*100
# Store into a pandas Dataframe
gender_perc = pd.DataFrame(perc)
#Reset the Index
gender_perc.reset_index(inplace=True)
#Assign name for the column variables
gender_perc.columns = ["Gender","%Gender"]
#Merge Gender and Gender% Pandas Dataframes to get final Gender Demographics summary
Final_demo = pd.merge(gender_group,gender_perc,on = "Gender")
#Assign Percentage formatting to the %Gender column variable and hide the index
Final_demo.style.format({"%Gender": "{0:,.2f}%"}).hide_index()
###Output
_____no_output_____
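###Markdown
A note on the `style.format(...).hide_index()` pattern above: a Styler only changes how the table is rendered, not the stored values, and index hiding is spelled `.hide_index()` or `.hide(axis="index")` depending on the pandas version. A toy sketch:
###Code
import pandas as pd

toy = pd.DataFrame({"Gender": ["Male", "Female"], "%Gender": [84.03, 14.49]})
# a Styler only changes how the frame is rendered; the underlying values stay numeric
styled = toy.style.format({"%Gender": "{0:,.2f}%"})
styled  # in a notebook this renders as an HTML table; the source DataFrame is untouched
###Output
_____no_output_____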
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Grouping the Dataframe by Gender and get aggregate values - Count, Mean and Sum
gender_summary = pd.DataFrame(purchase_data.groupby("Gender").agg({"Price":["count","mean","sum"]}))
#Reset the Index
gender_summary.reset_index(inplace=True)
#Assign name for the column variables
gender_summary.columns = ["Gender","Purchase Count","Average Purchase Value","Total Purchase Value"]
# Grouping the Dataframe by Gender and Get counts of Unique Players
uniqplayer_summary = purchase_data.groupby("Gender")["SN"].nunique().reset_index()
# Merge Gender Summary Data and Gender - Unique Players Summary Data
purchasing_analysis = pd.merge(gender_summary,uniqplayer_summary,on = "Gender")
# Calculate Av Total Purchase per Person with Gender - Unique Players count
purchasing_analysis ["Av Total Purchase per Person"] = purchasing_analysis["Total Purchase Value"]/purchasing_analysis ["SN"]
# Drop the Unique Players count variable(SN)
purchasing_analysis_final = purchasing_analysis .drop('SN',axis =1)
#Assign Currency formatting to the Price Values column variables and hide the index
purchasing_analysis_final.style.format({"Average Purchase Value": "${:20,.2f}",
"Total Purchase Value": "${:20,.2f}",
"Av Total Purchase per Person": "${:20,.2f}"}).hide_index()
###Output
_____no_output_____
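###Markdown
The `.agg({"Price": ["count", "mean", "sum"]})` call above produces MultiIndex columns, which is why the cell reassigns `.columns` after `reset_index()`. A minimal sketch on toy data:
###Code
import pandas as pd

toy = pd.DataFrame({"Gender": ["M", "M", "F"], "Price": [1.0, 3.0, 2.0]})
agg = toy.groupby("Gender").agg({"Price": ["count", "mean", "sum"]})
print(agg.columns.tolist())   # MultiIndex columns: [('Price', 'count'), ('Price', 'mean'), ('Price', 'sum')]
agg = agg.reset_index()
# flattening by reassigning .columns is what the cell above relies on
agg.columns = ["Gender", "Purchase Count", "Average Purchase Value", "Total Purchase Value"]
print(agg)
###Output
_____no_output_____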
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Create the age bins in which Data will be held
# Bins are "<10", "10-14", "15-19", "20-24", "25-29", "30-34","35-39","40+"
bins = [0,9,14,19,24,29,34,39,100]
# Create the names for the age bins
bin_names = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
# Assign the bins to the purchase_data dataframe
purchase_data['age_bins']=pd.cut(purchase_data["Age"],bins, labels=bin_names, include_lowest=True)
# Grouping the Dataframe by Age bins and Get counts of Unique Players
bins_summary = purchase_data.groupby("age_bins")["SN"].nunique().reset_index()
# Calculating the percentages by the age group
bins_summary['Percentage of Players'] = ((bins_summary['SN'] / bins_summary['SN'].sum()) * 100).map("{0:,.2f}%".format)
#Assign name for the column variables
bins_summary.columns = ["Age Ranges","Total Count","Percentage of Players"]
#Hide the index
bins_summary.style.hide_index()
###Output
_____no_output_____
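###Markdown
`include_lowest=True` matters because `pd.cut` builds right-closed intervals by default, so the lowest edge would otherwise be excluded. A toy sketch:
###Code
import pandas as pd

ages = pd.Series([0, 9, 10])
# bins are right-closed by default, so age 0 would fall outside (0, 9];
# include_lowest=True widens the first interval so that edge value is kept
print(pd.cut(ages, [0, 9, 14], labels=["<10", "10-14"], include_lowest=True))
###Output
_____no_output_____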
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Group by the Age bins and get the aggregate values Total,Av price and Total Purchase value
bins_summary2 = pd.DataFrame(purchase_data.groupby("age_bins").agg({"Price":["count","mean","sum"]}))
#Reset the Index
bins_summary2.reset_index(inplace=True)
#Assign name for the column variables
bins_summary2.columns = ["Age Ranges","Purchase Count","Average Purchase Value","Total Purchase Value"]
# Merge Age bins Uniq Players Data from the previous Age demographics report and Age bins Summary Data by Price
bins_summary2_final = pd.merge(bins_summary,bins_summary2,on = "Age Ranges")
# Calculate Av Total Purchase per Person with Age Bins - Unique Players count
bins_summary2_final["Av Total Purchase per Person"] = bins_summary2_final["Total Purchase Value"]/bins_summary2_final["Total Count"]
# Drop the columns not required
bins_summary2_final2 = bins_summary2_final.drop(['Total Count','Percentage of Players'],axis =1)
#Assign Currency formatting to the Price Values column variables and hide the index
bins_summary2_final2.style.format({"Average Purchase Value": "${:20,.2f}",
"Total Purchase Value": "${:20,.2f}",
"Av Total Purchase per Person": "${:20,.2f}"}).hide_index()
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Group by the Players and get the aggregate Purchase Count,Av price and Total Purchase value
topspender_summary = pd.DataFrame(purchase_data.groupby("SN").agg({"Price":["count","mean","sum"]}))
#Reset the Index
topspender_summary.reset_index(inplace=True)
#Assign name for the column variables
topspender_summary.columns = ["SN","Purchase Count","Average Purchase Price","Total Purchase Value"]
#Sort the Total Purchase Value variable by descending order to display the top 5 spenders
topspender = topspender_summary.sort_values(by='Total Purchase Value', ascending=False)
#Assign Currency formatting to the Price Values column variables and hide the index
topspender.head().style.format({"Average Purchase Price": "${:20,.2f}",
"Total Purchase Value": "${:20,.2f}"}).hide_index()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Group by multiple columns(Item ID and Item Name) and get the aggregate Purchase Count,Av price and Total Purchase value
popularitems_summary = pd.DataFrame(purchase_data.groupby(['Item ID','Item Name']).agg({"Price":["count","mean","sum"]}))
#Reset the Index
popularitems_summary.reset_index(inplace=True)
#Assign name for the column variables
popularitems_summary.columns = ["Item ID","Item Name","Purchase Count","Average Purchase Price","Total Purchase Value"]
#Sort the Purchase Count by descending order to display the top 5 popular Items and associated ID's
mostpopular = popularitems_summary.sort_values(by='Purchase Count', ascending=False)
#Assign Currency formatting to the Price Values column variables and hide the index
mostpopular.head().style.format({"Average Purchase Price": "${:20,.2f}",
"Total Purchase Value": "${:20,.2f}"}).hide_index()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Use the popularitems_summary dataframe to sort it by the total purchase value to get the most profitable items
mostprofitable = popularitems_summary.sort_values(by='Total Purchase Value', ascending=False)
#Assign Currency formatting to the Price Values column variables and hide the index
mostprofitable.head().style.format({"Average Purchase Price": "${:20,.2f}",
"Total Purchase Value": "${:20,.2f}"}).hide_index()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
heros_of_pymoli = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(heros_of_pymoli)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
players=purchase_data.loc[:, "SN"]
players = players.drop_duplicates()
players_total=players.count()
pd.DataFrame({"Total Player": [players_total]})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Number of unique items
unique_items_count= len(purchase_data["Item ID"].unique())
#Average Purchase Price
avg_purchase_price= purchase_data["Price"].mean()
#Total Number of Purchases
total_purchases= len(purchase_data["Purchase ID"].unique())
#Total Revenue
total_revenue= purchase_data["Price"].sum()
#Create Summary
purchasing_table= pd.DataFrame([{
"Number of Unique Items": unique_items_count,
"Average Price": avg_purchase_price,
"Number of Purchases": total_purchases,
"Total Revenue": total_revenue,
}], columns=["Number of Unique Items","Average Price","Number of Purchases","Total Revenue"])
purchasing_table["Average Price"]= purchasing_table["Average Price"].map("${0:,.2f}".format)
purchasing_table["Total Revenue"]= purchasing_table["Total Revenue"].map("${0:,.2f}".format)
purchasing_table
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Count and Percentage of Male Players
male_players= purchase_data.loc[purchase_data["Gender"]=="Male"]
male_count= len(male_players['SN'].unique())
male_percent= (male_count/players_total*100)
# Count and Percentage of Female Players
female_players= purchase_data.loc[purchase_data["Gender"]=="Female"]
female_count= len(female_players['SN'].unique())
female_percent= (female_count/players_total*100)
# Count and Percentage of Other / Non-Disclosed Players
other_players= purchase_data.loc[purchase_data["Gender"]=="Other / Non-Disclosed"]
other_count= len(other_players['SN'].unique())
other_percent= (other_count/players_total*100)
# Summary of Gender Demographics
gender_demographics = pd.DataFrame([{"Gender": "Male", "Total Count": male_count, "Percentage of Players": male_percent},
{"Gender": "Female", "Total Count": female_count, "Percentage of Players": female_percent},
{"Gender": "Other / Non-Disclosed", "Total Count": other_count, "Percentage of Players": other_percent}],
columns=["Gender", "Total Count", "Percentage of Players"])
# Formatting with % sign and 2 decimals
pd.options.display.float_format='{:.2f}%'.format
gender_demographics = gender_demographics.set_index('Gender')
gender_demographics.index.name = None
gender_demographics
###Output
_____no_output_____
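###Markdown
`pd.options.display.float_format` is a global display option, not a per-frame setting, which is why this notebook keeps resetting it before each table. A toy sketch with made-up numbers:
###Code
import pandas as pd

toy = pd.DataFrame({"Percentage of Players": [84.0256], "Average Price": [3.05]})
# this option is global: it changes how every float in every frame prints from here on,
# which is why the notebook re-assigns it before each summary table
pd.options.display.float_format = "{:.2f}%".format
print(toy)
pd.reset_option("display.float_format")   # restore the default afterwards
###Output
_____no_output_____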
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Purchase Count, Average Purchase Price and Total Purchase Vale of Male Players
male_purchase=purchase_data.loc[purchase_data["Gender"]=="Male", :]
male_purchase_count= len(male_purchase)
avg_male_price= purchase_data.loc[purchase_data["Gender"]=="Male", ["Price"]].mean()
total_male_purchase=purchase_data.loc[purchase_data["Gender"]=="Male", ["Price"]].sum()
# Purchase Count, Average Purchase Price and Total Purchase Vale of Female Players
female_purchase=purchase_data.loc[purchase_data["Gender"]=="Female", :]
female_purchase_count= len(female_purchase)
avg_female_price= purchase_data.loc[purchase_data["Gender"]=="Female", ["Price"]].mean()
total_female_purchase=purchase_data.loc[purchase_data["Gender"]=="Female", ["Price"]].sum()
# Purchase Count, Average Purchase Price and Total Purchase Vale of Other / Non-Disclosed
other_purchase=purchase_data.loc[purchase_data["Gender"]=="Other / Non-Disclosed", :]
other_purchase_count= len(other_purchase)
avg_other_price= purchase_data.loc[purchase_data["Gender"]=="Other / Non-Disclosed", ["Price"]].mean()
total_other_purchase=purchase_data.loc[purchase_data["Gender"]=="Other / Non-Disclosed", ["Price"]].sum()
# Average Purchase Total per Person by Gender
avg_male_purchase= total_male_purchase/male_count
avg_female_purchase= total_female_purchase/female_count
avg_other_purchase= total_other_purchase/other_count
# Summary of Purchasing Analysis(Gender)
gender_purchase_df=pd.DataFrame([
{"Gender": "Male", "Purchase Count":male_purchase_count,
"Average Purchase Price":avg_male_price[0],
"Total Purchase Value":total_male_purchase[0],
"Average Total Purchase per Person":avg_male_purchase[0]},
{"Gender": "Female", "Purchase Count":female_purchase_count,
"Average Purchase Price":avg_female_price[0],
"Total Purchase Value":total_female_purchase[0],
"Average Total Purchase per Person":avg_female_purchase[0]},
{"Gender": "Other / Non-Disclosed", "Purchase Count":other_purchase_count,
"Average Purchase Price":avg_other_price[0],
"Total Purchase Value":total_other_purchase[0],
"Average Total Purchase per Person":avg_other_purchase[0]}],
columns= ["Gender", "Purchase Count", "Average Purchase Price", "Total Purchase Value", "Average Total Purchase per Person"])
# Formatting with $
pd.options.display.float_format='${:,.2f}'.format
gender_purchase_df = gender_purchase_df.set_index('Gender')
gender_purchase_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Establish bins for ages
age_bins= [0, 9, 14, 19, 24, 29, 34, 39, 46]
age_ranges= ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Categorize the existing players using the age bins
purchase_data["Age Ranges"]= pd.cut(purchase_data["Age"], bins=age_bins, labels=age_ranges)
purchase_data
#Develop groupby from "Age Ranges"
age_group= purchase_data.groupby("Age Ranges")
# Count Total by each "Age Ranges"
total_age_count= age_group["SN"].nunique()
# Percentage for each "Age Ranges"
percentage_per_age= (total_age_count/players_total)*100
# Summary DataFrame
age_demographics= pd.DataFrame ({
"Total Players":total_age_count,
"Percentage of Players":percentage_per_age},
columns= ["Total Players", "Percentage of Players"])
# Formatting with % sign and 2 decimals
pd.options.display.float_format='{:.2f}%'.format
age_demographics.index.name = None
age_demographics
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Calculate Purchase Count
purchase_count_by_age= age_group["SN"].count()
# Calculate Average Purchase Price
avg_age_purchase_price=age_group["Price"].mean()
# Calculate Total Purchase Value
total_purchase_value=age_group["Price"].sum()
# Calculate Average Total Purchase per Person
avg_total_purchase_per_person=total_purchase_value/total_age_count
# Summary DataFrame
purchasing_analysis_by_age = pd.DataFrame({
"Purchase Count": purchase_count_by_age,
"Average Purchase Price": avg_age_purchase_price,
"Total Purchase Value": total_purchase_value,
"Average Total Purchase per Person":avg_total_purchase_per_person
})
# Formatting with $
pd.options.display.float_format='${:,.2f}'.format
purchasing_analysis_by_age
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Determine the top 5 spenders
top_spenders = purchase_data.groupby("SN")
# Calculate Purchase Count for top 5 spenders
top_spenders_purchase_count= top_spenders["Purchase ID"].count()
# Calculate Average Purchase Price
spenders_avg_purchase_price= top_spenders["Price"].mean()
# Calculate Total Purchase Value
spenders_total_purchase_value= top_spenders["Price"].sum()
# Summary DataFrame
top_spenders_df= pd.DataFrame({
"Purchase Count":top_spenders_purchase_count,
"Average Purchase Price":spenders_avg_purchase_price,
"Total Purchase Value":spenders_total_purchase_value})
# Formatting with $
pd.options.display.float_format='${:,.2f}'.format
#sort spenders
sort_top_spenders=top_spenders_df.sort_values(["Total Purchase Value"], ascending=False).head()
sort_top_spenders
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Retrieve the Item ID, Item Name, and Item Price columns
popular_items_list= purchase_data[["Item ID", "Item Name", "Price"]]
# Group by Item ID and Item Name
popular_items = popular_items_list.groupby(["Item ID", "Item Name"])
# Calculate Purchase Count
item_purchase_count= popular_items["Price"].count()
# Calculate Total Purchase Value
item_total_purchase_value= popular_items["Price"].sum()
# Calculate Average item Price
item_price= item_total_purchase_value/item_purchase_count
# Summary DataFrame
most_popular_items= pd.DataFrame({
"Purchase Count":item_purchase_count,
"Item Price":item_price,
"Total Purchase Value":item_total_purchase_value
})
# Formatting with $
pd.options.display.float_format='${:,.2f}'.format
#sort spenders
most_popular_items.sort_values("Purchase Count", ascending=False).head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
most_profitable_items_df=most_popular_items.sort_values("Total Purchase Value", ascending=False).head()
pd.options.display.float_format='${:,.2f}'.format
most_profitable_items_df
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
# File to Load (Remember to Change These)
purchase_data = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(purchase_data)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
player_count = len(purchase_data["SN"].unique())
player_count
player_count_table = pd.DataFrame({"Total Players": [player_count]})
player_count_table
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
unique_items_count = len(purchase_data["Item ID"].unique())
avg_purchase_price = purchase_data["Price"].mean()
total_purchases = len(purchase_data["Purchase ID"].unique())
total_revenue = purchase_data["Price"].sum()
Data_table = pd.DataFrame([{
"Number of Unique Items": unique_items_count,
"Average Price": avg_purchase_price,
"Number of Purchases": total_purchases,
"Total Revenue": total_revenue,
}], columns=["Number of Unique Items", "Average Price", "Number of Purchases", "Total Revenue"])
Data_table["Average Price"] = Data_table["Average Price"].map("${0:.2f}".format)
Data_table["Total Revenue"] = Data_table["Total Revenue"].map("${0:,.2f}".format)
Data_table
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
male_players = purchase_data.loc[purchase_data["Gender"] == "Male"]
male_count = len(male_players["SN"].unique())
male_percent = "{:.2f}%".format(male_count / player_count * 100)
female_players = purchase_data.loc[purchase_data["Gender"] == "Female"]
female_count = len(female_players["SN"].unique())
female_percent = "{:.2f}%".format(female_count / player_count * 100)
other_players = purchase_data.loc[purchase_data["Gender"] == "Other / Non-Disclosed"]
other_count = len(other_players["SN"].unique())
other_percent = "{:.2f}%".format(other_count / player_count * 100)
gender_demographics_table = pd.DataFrame([{
"Gender": "Male", "Total Count": male_count,
"Percentage of Players": male_percent},
{"Gender": "Female", "Total Count": female_count,
"Percentage of Players": female_percent},
{"Gender": "Other / Non-Disclosed", "Total Count": other_count,
"Percentage of Players": other_percent
}], columns=["Gender", "Total Count", "Percentage of Players"])
gender_demographics_table
###Output
_____no_output_____
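###Markdown
A design note, not part of the original solution: the three per-gender `.loc` filters above can be collapsed into a single `groupby` with named aggregation. A sketch on toy data with illustrative names:
###Code
import pandas as pd

toy = pd.DataFrame({"Gender": ["Male", "Female", "Male"],
                    "SN": ["a", "b", "a"],
                    "Price": [1.0, 2.0, 3.0]})
# one groupby covers every gender at once instead of three separate .loc filters
print(toy.groupby("Gender").agg(players=("SN", "nunique"), total_value=("Price", "sum")))
###Output
_____no_output_____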
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
female_purchase_data = purchase_data.loc[purchase_data["Gender"] == "Female", :]
female_purchase_count = len(female_purchase_data)
avg_female_purchase_price = purchase_data.loc[purchase_data["Gender"] == "Female", ["Price"]].mean()
total_female_purchase_value = purchase_data.loc[purchase_data["Gender"] == "Female", ["Price"]].sum()
male_purchase_data = purchase_data.loc[purchase_data["Gender"] == "Male", :]
male_purchase_count = len(male_purchase_data)
avg_male_purchase_price = purchase_data.loc[purchase_data["Gender"] == "Male", ["Price"]].mean()
total_male_purchase_value = purchase_data.loc[purchase_data["Gender"] == "Male", ["Price"]].sum()
other_purchase_data = purchase_data.loc[purchase_data["Gender"] == "Other / Non-Disclosed", :]
other_purchase_count = len(other_purchase_data)
avg_other_purchase_price = purchase_data.loc[purchase_data["Gender"] == "Other / Non-Disclosed", ["Price"]].mean()
total_other_purchase_value = purchase_data.loc[purchase_data["Gender"] == "Other / Non-Disclosed", ["Price"]].sum()
avg_male_purchase_total_person = total_male_purchase_value / male_count
avg_female_purchase_total_person = total_female_purchase_value / female_count
avg_other_purchase_total_person = total_other_purchase_value / other_count
gender_purchasing_analysis_table = pd.DataFrame([{
"Gender": "Female", "Purchase Count": female_purchase_count,
"Average Purchase Price": "${:.2f}".format(avg_female_purchase_price[0]),
"Total Purchase Value": "${:.2f}".format(total_female_purchase_value[0]),
"Avg Total Purchase per Person": "${:.2f}".format(avg_female_purchase_total_person[0])},
{"Gender": "Male", "Purchase Count": male_purchase_count,
"Average Purchase Price": "${:.2f}".format(avg_male_purchase_price[0]),
"Total Purchase Value": "${:,.2f}".format(total_male_purchase_value[0]),
"Avg Total Purchase per Person": "${:.2f}".format(avg_male_purchase_total_person[0])},
{"Gender": "Other / Non-Disclosed", "Purchase Count": other_purchase_count,
"Average Purchase Price": "${:.2f}".format(avg_other_purchase_price[0]),
"Total Purchase Value": "${:.2f}".format(total_other_purchase_value[0]),
"Avg Total Purchase per Person": "${:.2f}".format(avg_other_purchase_total_person[0])
}], columns=["Gender", "Purchase Count", "Average Purchase Price", "Total Purchase Value", "Avg Total Purchase per Person"])
gender_purchasing_analysis_table
###Output
_____no_output_____
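###Markdown
Aside: a hedged sketch of the same per-gender purchase metrics computed in one pass with `groupby` (assuming the `purchase_data` frame from above), which avoids repeating the `.loc` filter for each gender.
###Code
# Sketch only: purchase count / average price / total value per gender in a single groupby
gender_stats = purchase_data.groupby("Gender")["Price"].agg(["count", "mean", "sum"])
gender_stats.columns = ["Purchase Count", "Average Purchase Price", "Total Purchase Value"]
gender_stats
###Output
_____no_output_____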
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
age_bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
groups_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins=age_bins, labels=groups_names)
purchase_data
age_group = purchase_data.groupby("Age Group")
total_count_age = age_group["SN"].nunique()
percentage_by_age = round(total_count_age / player_count * 100,2)
age_demographics_table = pd.DataFrame({
"Total Count": total_count_age,
"Percentage of Players": percentage_by_age
})
age_demographics_table
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
groups_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins=age_bins, labels=groups_names)
age_purchase_count = age_group["SN"].count()
avg_age_purchase_price = round(age_group["Price"].mean(),2)
total_age_purchase_value = round(age_group["Price"].sum(),2)
avg_total_age_purchase_person = round(total_age_purchase_value / total_count_age,2)
age_purchasing_analysis_table = pd.DataFrame({
"Purchase Count": age_purchase_count,
"Average Purchase Price": avg_age_purchase_price,
"Total Purchase Value": total_age_purchase_value,
"Avg Total Purchase per Person": avg_total_age_purchase_person
})
age_purchasing_analysis_table
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
groups_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins=age_bins, labels=groups_names)
age_purchase_count = age_group["SN"].count()
avg_age_purchase_price = round(age_group["Price"].mean(),2)
total_age_purchase_value = round(age_group["Price"].sum(),2)
avg_total_age_purchase_person = round(total_age_purchase_value / total_count_age,2)
age_purchasing_analysis_table = pd.DataFrame({
"Purchase Count": age_purchase_count,
"Average Purchase Price": avg_age_purchase_price,
"Total Purchase Value": total_age_purchase_value,
"Avg Total Purchase per Person": avg_total_age_purchase_person
})
age_purchasing_analysis_table
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
top_spenders = purchase_data.groupby("SN")
spender_purchase_count = top_spenders["Purchase ID"].count()
average_spender_purchase_price = round(top_spenders["Price"].mean(),2)
total_spender_purchase_value = top_spenders["Price"].sum()
top_spenders_table = pd.DataFrame({
"Purchase Count": spender_purchase_count,
"Average Purchase Price": average_spender_purchase_price,
"Total Purchase Value": total_spender_purchase_value
})
sort_top_spenders = top_spenders_table.sort_values("Total Purchase Value", ascending=False)
sort_top_spenders.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
df = pd.read_csv(file)
df.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#Find the total number of players by counting the unique SN
total_players = df["SN"].nunique()
#Put the total in a new data frame
total_df = pd.DataFrame({"Total Players": total_players}, index=[0])
#Display
total_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Find the number of unique items by counting the unique Item ID
unique_items = len(df["Item ID"].unique())
#Calculate the average price by taking the mean of the price column
avg_price = "${:.2f}".format(df["Price"].mean())
#Find the total number of purchases by counting the total Purchase IDs
total_purchases = df["Purchase ID"].count()
#Find the total revenue by taking the sum of the price column
revenue = "${:,.2f}".format(df["Price"].sum())
#Put all of the above items and their labels into a dictionary
summary_dict = {"Number of Unique Items": unique_items, "Average Price": avg_price,
"Number of Purchases": total_purchases, "Total Revenue": revenue}
#Convert that dictionary into a dataframe
summary_df = pd.DataFrame(summary_dict, index=[0])
#Display
summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Drop the duplicate values for SN
unique_df = df.drop_duplicates("SN")
#Use the new unique dataframe to get value counts for each gender
unique_gender_count = unique_df["Gender"].value_counts()
#Calculate the percentage of each gender by dividing the gender count by the total players (from previous cell)
gender_percent = (unique_gender_count / total_players)*100
#Put all the above items into a new dataframe
gender_df = pd.DataFrame({"Total Count": unique_gender_count, "Percentage of Players": gender_percent})
#Format as Percentage
pd.options.display.float_format = "{:.2f}%".format
#Display
gender_df
###Output
_____no_output_____
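###Markdown
Note (hedged aside): `pd.options.display.float_format` is a global display setting, so it also affects every later table until it is overwritten, as the currency cells below do. A per-table alternative, assuming the same `gender_df`, is to format just the one column.
###Code
# Sketch only: format this table's percentage column without touching the global display option
gender_df_view = gender_df.copy()
gender_df_view["Percentage of Players"] = gender_df_view["Percentage of Players"].map("{:.2f}%".format)
gender_df_view
###Output
_____no_output_____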
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Group the original data frame by Gender
gen_df = df.groupby("Gender")
#Get the total gender purchase count by counting the items in Gender in the grouped data frame
gender_count = gen_df["Gender"].count()
#Get the total purchase value by taking the sum of Price in the grouped data frame
gender_sum = gen_df["Price"].sum()
#Calculate average price for each gender by taking the gender sum dividided by the gender count
avg_price_gender = gender_sum / gender_count
#Calculate the average price per person for each gender by dividing the gender sum by the unique gender count
#(from the previous cell)
avg_ppr_gender = gender_sum / unique_gender_count
#Put all the above items into a new dataframe
gender_analysis = pd.DataFrame({"Purchase Count": gender_count, "Average Purchase Price": avg_price_gender,
"Total Purchase Value": gender_sum, "Average Total Purchase Per Person": avg_ppr_gender})
#Format as Currency
pd.options.display.float_format = "${:.2f}".format
#Display
gender_analysis
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#Create the bins to use for the age ranges
bins = [0, 9.9, 14, 19, 24, 29, 34, 39, 100]
#Create the names for the bins
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#Add a new column "Age Ranges" to the original data frame
df["Age Ranges"] = pd.cut(df["Age"], bins, labels=group_names, include_lowest=True)
#Drop the duplicate values for SN
unique_age_df = df.drop_duplicates("SN")
#Group the unique data frame by Age Range
unique_age_df = unique_age_df.groupby("Age Ranges")
#Get the unique age counts by counting the items in Age Ranges in the grouped unique data frame
unique_age_count = unique_age_df["Age Ranges"].count()
#Get the percentage of players in Age Ranges by dividing the unique age count by the total players
age_percent = (unique_age_count/total_players)*100
#Put all the above items into a new dataframe
age_df = pd.DataFrame({"Total Count": unique_age_count, "Percentage of Players": age_percent})
#Format as percentage
pd.options.display.float_format = "{:.2f}%".format
#Display
age_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Group the new data frame with Age Ranges included by the Age Ranges
age_data = df.groupby("Age Ranges")
#Get the age purchase count by counting the items in Age
age_count = age_data["Age"].count()
#Get the total purchase value by taking the sum of the items in Price
age_sum = age_data["Price"].sum()
#Get the average purchase price for each range by dividing the age sum by the age count
avg_price_age = age_sum / age_count
#Get the average purchase per person for the age ranges by dividing the age sum by the unique age count
avg_ppr_age = age_sum / unique_age_count
#Put all the above items into a new dataframe
age_analysis = pd.DataFrame({"Purchase Count": age_count, "Average Purchase Price": avg_price_age,
"Total Purchase Value": age_sum, "Average Total Purchase Per Person": avg_ppr_age})
#Format as currency
pd.options.display.float_format = '${:,.2f}'.format
#Display
age_analysis
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Group the original data frame by SN
#Get the purchase count for each player by counting the items in Price
#Get the average purchase price per player by taking the mean of the items in Price
#Get the total purchase value for each player by taking the sum of the items in Price
purchase_count = df.groupby("SN").count()["Price"]
player_avg_price = df.groupby("SN").mean()["Price"]
player_sum = df.groupby("SN").sum()["Price"]
#Put all the above items into a new dataframe
player_spending_data = pd.DataFrame({"Purchase Count": purchase_count, "Average Purchase Price": player_avg_price,
"Total Purchase Value": player_sum})
#Format as currency
pd.options.display.float_format = '${:,.2f}'.format
#Sort the new data frame by Total Purchase Value (descending) top get the Top Spenders
top_spenders = player_spending_data.sort_values("Total Purchase Value", ascending=False)
#Display
top_spenders.head(5)
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Group the whole data frame by Item ID and Item Name
#Get the purchase count of each item by counting the items in Purchase ID
#Get the Item Price by taking the max of the items in Price
#(each item has only one price so the max will just return the item price)
#Get the total purchase value for each item by multiplying the purchase count and purchase price of each item
purch_count_item = df.groupby(["Item ID", "Item Name"]).count()["Purchase ID"]
purch_price = df.groupby(["Item ID", "Item Name"]).max()["Price"]
total_purch_value = purch_count_item * purch_price
#Put all the above items into a new dataframe
most_popular_df = pd.DataFrame({"Purchase Count": purch_count_item, "Item Price": purch_price,
"Total Purchase Value": total_purch_value})
#Sort the new data frame by Purchase Count (descending) to get the Most Popular items
most_popular_df = most_popular_df.sort_values("Purchase Count", ascending=False)
#Display
most_popular_df.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Take the Most Popular data frame from previous cell and sort by Total Purchase Value
#this will get the Most Profitable items
most_profitable_df = most_popular_df.sort_values("Total Purchase Value", ascending=False)
#Display
most_profitable_df.head(5)
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# import Dependencies
import pandas as pd
# Read Purchasing File and store into Pandas data frame
gamers_df = pd.read_csv("Resources/purchase_data.csv")
gamers_df.head()
#Do a simple count of columns to make sure there's no missing data
gamers_df.count()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#Find the total number of unique sign names/players ("SN")
player_count = len(gamers_df["SN"].unique())
player_count
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Calculate the number of Unique Items
unique_items = len(gamers_df["Item Name"].unique())
unique_items
#Calculate the average price
average_price = gamers_df["Price"].mean()
average_price
#What is the total number of purchases
total_purchases = gamers_df["Price"].count()
total_purchases
#What is the revenue
total_revenue = gamers_df["Price"].sum()
total_revenue
#Display all calculations in a summary table
summary_table = pd.DataFrame({"Number of Unique Items": [unique_items],
"Average Price": average_price,
"Number of Purchases": total_purchases,
"Total Revenue": total_revenue})
summary_table
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Create a new dataframe dropping username duplicates
unique_gamers_df = gamers_df.drop_duplicates("SN")
unique_gamers_df
#Find our value counts by gender for our new df
unique_gamers_df["Gender"].value_counts()
#Create New DataFrame using the counts
Gender_Counts = unique_gamers_df["Gender"].value_counts().to_frame(name = "Total Count")
Gender_Counts
#Create a new column calculating the % of players
POP = Gender_Counts["Total Count"]/player_count
Gender_Counts["Percentage of Players"] = POP
Gender_Counts
#Convert Percentage of Players column to 0.00%
dict = {'Percentage of Players': '{:.2%}'}
Gender_Demographics_Final = Gender_Counts.style.format(dict)
Gender_Demographics_Final
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Use our original dataframe to create our Groupby Summaries
gamers_df.groupby('Gender')['Price'].agg(['count', 'mean', 'sum'])
#Create new dataframe using the summaries
Purchase_Analysis = gamers_df.groupby('Gender')['Price'].agg(['count', 'mean', 'sum'])
Purchase_Analysis
#Rename the columns
renamed_df = Purchase_Analysis.rename(columns={"count":"Purchase Count", "mean":"Average Purchase Price", "sum":"Total Purchase Value"})
renamed_df
#Convert Average Purchase Price and Total Purchase Value columns to $
dict = {'Average Purchase Price':'${0:,.2f}', 'Total Purchase Value':'${0:,.2f}'}
Purchasing_Analysis_Final = renamed_df.style.format(dict)
Purchasing_Analysis_Final
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#Create a new dataframe frome our df where we dropped the duplicates
demos=unique_gamers_df.copy()
demos
# Create the bins in which Data will be held
bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
# Create the names for the bins
age_ranges = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Slice the data and place it into bins
pd.cut(demos["Age"], bins, labels=age_ranges, include_lowest=True)
#Create Age Range Column
demos["Age Range"] = pd.cut(demos["Age"], bins, labels=age_ranges, include_lowest=True)
demos
# Create a GroupBy for Ages
Age_Count = demos.groupby("Age Range")
Age_Count["Age"].count()
#Create New DataFrame using the counts
Total_Count = Age_Count["Age"].count().to_frame(name = "Total Count")
Total_Count
#Create a new column calculating the % of players
Percentage_of_Players = Total_Count["Total Count"]/player_count
Total_Count["Percentage of Players"] = Percentage_of_Players
Total_Count
#Make sure the percentage column is a float
Total_Count.dtypes
#Convert Percentage column to 0.00%
dict = {'Percentage of Players': '{:.2%}'}
Age_Demographics_Final = Total_Count.style.format(dict)
Age_Demographics_Final
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Create new data frame from original
new_gamers_df = gamers_df.copy()
new_gamers_df
# Create the bins in which Data will be held
bins2 = [0, 9, 14, 19, 24, 29, 34, 39, 100]
# Create the names for the bins
age_ranges2 = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Slice the data and place it into bins
pd.cut(new_gamers_df["Age"], bins2, labels=age_ranges2, include_lowest=True)
#Create Age Range Column
new_gamers_df["Age Range"] = pd.cut(new_gamers_df["Age"], bins2, labels=age_ranges2, include_lowest=True)
new_gamers_df
#Create a groupby calling simple calcs on data frame
new_gamers_df.groupby('Age Range')['Price'].agg(['count', 'mean', 'sum'])
#Create new dataframe using the summaries
Purchase_Analysis2 = new_gamers_df.groupby('Age Range')['Price'].agg(['count', 'mean', 'sum'])
Purchase_Analysis2
#Rename the columns
renamed_df2 = Purchase_Analysis2.rename(columns={"count":"Purchase Count", "mean":"Average Purchase Price", "sum":"Total Purchase Value"})
renamed_df2
#Convert Average Purchase Price and Total Purchase Value columns to $
dict = {'Average Purchase Price':'${0:,.2f}', 'Total Purchase Value':'${0:,.2f}'}
Purchasing_Analysis_Age_Final = renamed_df2.style.format(dict)
Purchasing_Analysis_Age_Final
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Create new data frame from original
Top_spenders_df = gamers_df.copy()
Top_spenders_df
#Create our groupby by sign name/users
grouped_spenders = Top_spenders_df.groupby(['SN'])
#Count the number of purchases per spender
df_count = grouped_spenders["Price"].count()
df_count
#Sum all the purchases per spender
df_sum = grouped_spenders["Price"].sum()
df_sum
#Average the purchases per spender
df_mean = grouped_spenders["Price"].mean()
df_mean
#Create a new data frame from previous calcs
Top_spenders2 = pd.DataFrame({"Purchase Count": df_count, "Average Purchase Price": df_mean,
"Total Purchase Value": df_sum})
Top_spenders2
#Sort the Total Purchase Value column in descending order to look at the top spender
sorted_values = Top_spenders2.sort_values("Total Purchase Value", ascending=False)
sorted_values
#Convert Average Purchase Price and Total Purchase Value columns to $ and create final data frame
dict = {'Average Purchase Price':'${0:,.2f}', 'Total Purchase Value':'${0:,.2f}'}
Top_Spenders_Final = sorted_values.style.format(dict)
Top_Spenders_Final
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Create new data frame from original
most_popular_items = gamers_df.copy()
most_popular_items
#Create our groupby by Item Id and Item Name
grouped = most_popular_items.groupby(['Item ID', 'Item Name', 'Price'])
#Total up the purchases per item
df_count2 = grouped["Price"].count()
df_count2
#Sum up the purchases per item
df_sum2 = grouped["Price"].sum()
df_sum2
#Create New dataframe and entering in previous calcs
Top_items = pd.DataFrame({"Purchase Count": df_count2, "Total Purchase Value": df_sum2})
Top_items
#Sort the Purchase Count column in descending order to see the most purchased items
sorted_values2 = Top_items.sort_values("Purchase Count", ascending=False)
sorted_values2
#Move the Price index into a column
sorted_values3 = sorted_values2.reset_index(level='Price')
sorted_values3
#Makes sure Price is now a column
sorted_values3.columns
# Reorganize the columns using double brackets
organized_df = sorted_values3[["Purchase Count", "Price", "Total Purchase Value"]]
organized_df
# Rename the Price column to "Item Price"
renamed_df2 = organized_df.rename(columns={"Price":"Item Price"})
renamed_df2
#Convert Item Price and Total Purchase Value columns to $ and create final data frame
dict = {'Item Price':'${0:,.2f}', 'Total Purchase Value':'${0:,.2f}'}
Most_Popular_Items_Final = renamed_df2.style.format(dict)
Most_Popular_Items_Final
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Create new data frame from previous dataframe
most_profitable_items = renamed_df2
most_profitable_items
#Sort the Total_Purchase_Value column in descending order to see the most valuable item
most_profitable_items2 = most_profitable_items.sort_values("Total Purchase Value", ascending=False)
most_profitable_items2
#Convert Item Price and Total Purchase Value columns to $ and create final data frame
dict = {'Item Price':'${0:,.2f}', 'Total Purchase Value':'${0:,.2f}'}
Most_Profitable_Items_Final = most_profitable_items2.style.format(dict)
Most_Profitable_Items_Final
###Output
_____no_output_____
###Markdown
Heroes Of Pymoli Data Analysis* Of the 780 active players, the vast majority are male (83.59%). There also exists, a smaller, but notable proportion of female players (14.49%).* Peak age demographic falls between the ages of 23-30 (29.10%) with secondary groups falling between 31-35 (19.87%), and 18-22 (11.79%). * Item 92, Final Critic was both the most popular and most profitable item with a purchase count of 13 at 4.61 per unit for a total purchase value of 59.99. ----- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv('/Users/danny_delatorre/Desktop/Data Bootcamp/Pending Homework/pandas-challenge/HeroesOfPymoli/purchase_data.csv')
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# Display the total number of players
player_count = purchase_data['SN'].count()
player_count_df = pd.DataFrame({'Total Players':[player_count]})
player_count_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Run basic calculations to obtain number of... unique items, average price, etc.
# unique items
unique_items = purchase_data['Item ID'].nunique()
# average price
average_price = purchase_data['Price'].mean()
# total purchases
total_purchases = purchase_data['Purchase ID'].count()
# total revenue
total_revenue = purchase_data['Price'].sum()
# Create a summary data frame to hold the results
total_purchases_df = pd.DataFrame({'Unique Items':[unique_items], 'Average Price':[average_price],
'# of Purchases':[total_purchases], 'Revenue':[total_revenue]})
total_purchases_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Gender Demographics
# Percentage and Count of Male Players - Percentage and Count of Female Players - Percentage and Count of Other / Non-Disclosed
player_count = purchase_data['SN'].count()
gender_count = purchase_data['Gender'].value_counts()
gender_percent = gender_count/(player_count)*100
rounded_percent = gender_percent.round(2)
# output dataframe
gender_demo_df = pd.DataFrame({'Players Percentage':rounded_percent, 'Player Count':gender_count})
# give index a title
gender_demo_df.index.name = 'Gender'
# align all df headers
gender_demo_df.columns.name = gender_demo_df.index.name
gender_demo_df.index.name = None
# print df
gender_demo_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
player_count = purchase_data['SN'].count()
purchase_count = purchase_data.groupby(['Gender']).count()['Price']
average_purchase_price = purchase_data.groupby(['Gender']).mean()['Price']
purchase_total = purchase_data.groupby(['Gender']).sum()['Price']
purchase_total_per_person = (purchase_total / player_count)
# drop duplicates
purchase_data.drop_duplicates(inplace=True)
# Create a summary data frame to hold the results
gender_summary_df = pd.DataFrame({'Purchase Count':purchase_count,
'Avg. Purchase Price':average_purchase_price.round(2),
'Avg. Purchase Total': purchase_total_per_person})
# align all df headers
gender_summary_df.columns.name = gender_summary_df.index.name
gender_summary_df.index.name = None
# print df
gender_summary_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Establish Bins For Ages
bins = [0, 12, 17, 22, 30, 35, 40, 100]
bin_labels = ['0-12', '13-17', '18-22', '23-30', '31-35', '36-40', '40+']
# Categorize the existing players using the age bins. Hint: use pd.cut()
age_ranges = purchase_data['Age Group'] = pd.cut(purchase_data['Age'], bins, labels = bin_labels)
age_ranges_df = pd.DataFrame({'Age Group':age_ranges})
# Calculate the numbers and percentages by age group - find count and percents essentially
players_count = len(purchase_data.index)
age_groups = purchase_data.groupby('Age Group')
age_total_count = age_groups['SN'].nunique()
percentages_by_age_group = (age_total_count/players_count) * 100
# Create a summary data frame to hold the results - Optional: round the percentage column to two decimal points (.round(2))
age_demographics_df = pd.DataFrame({'% of Players': percentages_by_age_group.round(2), 'Player Count': age_total_count})
# align all df headers
age_demographics_df.columns.name = age_demographics_df.index.name
age_demographics_df.index.name = None
# Display Age Demographics Table
age_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Establish Bins For Ages
bins = [0, 12, 17, 22, 30, 35, 40, 100]
bin_labels = ['0-12', '13-17', '18-22', '23-30', '31-35', '36-40', '40+']
# Categorize the existing players using the age bins. Hint: use pd.cut()
age_ranges = purchase_data['Age Group'] = pd.cut(purchase_data['Age'], bins, labels = bin_labels)
age_ranges_df = pd.DataFrame({'Age Group':age_ranges})
# Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
purchase_count = purchase_data.groupby(['Age Group']).count()['Price']
average_purchase_price = purchase_data.groupby(['Age Group']).mean()['Price']
# purchase total
purchase_total_per_person = purchase_data.groupby(['Age Group']).sum()['Price']
purchase_data.drop_duplicates(inplace=True)
# Create a summary data frame to hold the results
puchasing_analysis_df = pd.DataFrame({'Purchase Count':purchase_count,
'Average Price':average_purchase_price.round(2),
'Purchase Total': purchase_total_per_person})
# Give the displayed data cleaner formatting
puchasing_analysis_df.columns.name = puchasing_analysis_df.index.name
puchasing_analysis_df.index.name = None
# Display the summary data frame
puchasing_analysis_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Run basic calculations to obtain the results in the table below
transaction_count = purchase_data.groupby(['SN']).count()['Price']
total_spent = purchase_data.groupby(['SN']).sum()['Price']
# Create a summary data frame to hold the results
top_spenders_df = pd.DataFrame({'Transaction Count': transaction_count, 'Purchase Value': total_spent})
# Sort the total purchase value column in descending order
top_spenders_df = top_spenders_df.sort_values('Purchase Value', ascending=False)
# Optional: give the displayed data cleaner formatting
top_spenders_df.columns.name = top_spenders_df.index.name
top_spenders_df.index.name = None
# Display a preview of the summary data frame
top_spenders_df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Retrieve the Item ID, Item Name, and Item Price columns
retrieve_items = purchase_data[['Item ID', 'Item Name', 'Price']]
# Group by Item ID and Item Name.
item_data = retrieve_items.groupby(['Item ID', 'Item Name'])
# Perform calculations to obtain...
# purchase count
purchase_item_count = item_data['Price'].count()
# total purchase value
purchase_value = item_data['Price'].sum()
# & item price
item_price = (purchase_value/purchase_item_count)
# Create a summary data frame to hold the results
most_popular_items_df = pd.DataFrame({'Purchase Count': purchase_item_count, 'Item Price': item_price.round(2), 'Total Purchase Value': purchase_value})
# Sort the purchase count column in descending order
most_popular_items_df = most_popular_items_df.sort_values('Purchase Count', ascending=False)
# Display a preview of the summary data frame
most_popular_items_df.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Sort the above table by total purchase value in descending order
most_profitable_items_df = most_popular_items_df.sort_values(by = ['Total Purchase Value'],ascending = False)
most_profitable_items_df = most_profitable_items_df.drop(labels = ['Purchase Count','Item Price'],axis = 1)
# Display a preview of the data frame - Top 10
most_profitable_items_df.head(10)
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
count = purchase_data["SN"].value_counts()
total_players = pd.DataFrame([len(count)], columns = ["Total Players"])
total_players
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
unique = purchase_data["Item Name"].value_counts()
average = purchase_data["Price"].mean()
purchase = purchase_data["Purchase ID"].value_counts()
revenue = purchase_data["Price"].sum()
#summary of the purchases
purchasing_analysis = pd.DataFrame({
"Number of Unique Items": [len(unique)],
"Average Price": "$"+ str(average.round(2)),
"Number of Purchases": [len(purchase)],
"Total Revenue" : "$"+ str(revenue)
})
purchasing_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
gender = purchase_data[["SN","Gender"]]
gender = gender.drop_duplicates()
counts = gender["Gender"].value_counts()
#first find the number by gender
demographics = pd.DataFrame({"Total Count": counts})
totalcount = demographics["Total Count"].sum()
#using what we've found regarding number by gender, find the percentage
percentage = [str((counts[0]*100/totalcount).round(2)) + "%",
str((counts[1]*100/totalcount).round(2)) + "%",
str((counts[2]*100/totalcount).round(2)) + "%"]
percentage_col = pd.DataFrame({"Total Count" : counts, "Percentage of Players" : percentage})
percentage_col
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
grouped_df = purchase_data.groupby(["Gender"])
pur_count = purchase_data["Gender"].value_counts()
avg_pur = grouped_df["Price"].mean()
total_pur = grouped_df["Price"].sum()
avg_by_person = total_pur / counts
# the variable counts was defined in the previous cell;
# the difference between pur_count and counts is that counts excludes repeated players (unique SN only)
percentage_col = pd.DataFrame({
"Purchase Count" : pur_count,
"Average Purchase Price": avg_pur,
"Total Purchase Value" : total_pur,
"Avg Total Purchase per Person" : avg_by_person
})
percentage_col["Average Purchase Price"] = percentage_col["Average Purchase Price"].map("${:.2f}".format)
percentage_col["Total Purchase Value"] = percentage_col["Total Purchase Value"].map("${:,.2f}".format)
percentage_col["Avg Total Purchase per Person"] = percentage_col["Avg Total Purchase per Person"].map("${:.2f}".format)
percentage_col
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#sorting, binding
age = purchase_data.sort_values("Age", ascending = False)
#bins
bins = [0, 9, 14, 19, 24, 29,
34, 39, 45]
#labels
group_labels = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins, labels=group_labels)
#create new data frame to avoid complexity
age_demo = purchase_data[["SN","Age", "Age Group"]]
age_demo = age_demo.drop_duplicates()
age_group = age_demo.groupby("Age Group")
group_count = age_group["SN"].count()
total_group_count = group_count.sum()
#find the percentage of players
age_percentage = [str((group_count[0]*100/total_group_count).round(2)) + "%",
str((group_count[1]*100/total_group_count).round(2)) + "%",
str((group_count[2]*100/total_group_count).round(2)) + "%",
str((group_count[3]*100/total_group_count).round(2)) + "%",
str((group_count[4]*100/total_group_count).round(2)) + "%",
str((group_count[5]*100/total_group_count).round(2)) + "%",
str((group_count[6]*100/total_group_count).round(2)) + "%",
str((group_count[7]*100/total_group_count).round(2)) + "%"]
age_group_demo = pd.DataFrame({"Total Count" : group_count, "Percentage of Players" : age_percentage})
age_group_demo
###Output
_____no_output_____
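###Markdown
Aside: a hedged, vectorised sketch of the same percentage column (assuming `group_count` and `total_group_count` from above), which avoids listing each age bin by hand.
###Code
# Sketch only: compute every bin's percentage at once instead of indexing group_count[0]..group_count[7]
age_percentage_vectorised = (group_count * 100 / total_group_count).round(2).astype(str) + "%"
pd.DataFrame({"Total Count": group_count, "Percentage of Players": age_percentage_vectorised})
###Output
_____no_output_____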
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#create another data frame including price
#the column regarding age group is already added previously
age_demo_2 = purchase_data[["SN","Age", "Age Group", "Price"]]
age_group_2 = age_demo_2.groupby("Age Group")
group_count_2 = age_group_2["SN"].count()
total_group_count = group_count.sum()
grouped2_df = purchase_data.groupby(["Age Group"])
pur_count_2 = purchase_data["Age Group"].value_counts()
avg_pur_2 = grouped2_df["Price"].mean()
total_pur_2 = grouped2_df["Price"].sum()
#create a new frame without 'Price' so repeated SNs can be dropped (duplicates cannot be removed while each row's 'Price' differs)
age_demo_3 = purchase_data[["SN","Age", "Age Group"]]
age_demo_3 = age_demo_3.drop_duplicates()
age_group_3 = age_demo_3.groupby("Age Group")
age_counts = age_demo_3["Age Group"].value_counts()
avg_by_person_2 = (total_pur_2 / age_counts)
percentage_col_2 = pd.DataFrame({
"Purchase Count" : pur_count_2,
"Average Purchase Price": avg_pur_2,
"Total Purchase Value" : total_pur_2,
"Avg Total Purchase per Person" : avg_by_person_2
})
percentage_col_2["Average Purchase Price"] = percentage_col_2["Average Purchase Price"].map("${:.2f}".format)
percentage_col_2["Total Purchase Value"] = percentage_col_2["Total Purchase Value"].map("${:,.2f}".format)
percentage_col_2["Avg Total Purchase per Person"] = percentage_col_2["Avg Total Purchase per Person"].map("${:.2f}".format)
percentage_col_2
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#find top spenders using sorting / grouping
sn_group = purchase_data.groupby(["SN"])
sn_count = sn_group["SN"].count()
sn_avg_pur = sn_group["Price"].mean()
sn_total_pur = sn_group["Price"].sum()
spenders = pd.DataFrame({
"Purchase Count" : sn_count,
"Average Purchase Price": sn_avg_pur,
"Total Purchase Value" : sn_total_pur
})
top_spenders = spenders.sort_values("Total Purchase Value", ascending = False)
top_spenders["Average Purchase Price"] = top_spenders["Average Purchase Price"].map("${:.2f}".format)
top_spenders["Total Purchase Value"] = top_spenders["Total Purchase Value"].map("${:.2f}".format)
top_spenders.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#using sorting /grouping/ and functions to find the most popular items
popular_items = purchase_data[["Purchase ID","Item ID", "Item Name", "Price"]]
item_group = popular_items.groupby(["Item ID", "Item Name"])
item_count = item_group["Purchase ID"].count()
total_value = item_group["Price"].sum()
populars = pd.DataFrame({
"Purchase Count" : item_count,
"Item Price": total_value / item_count,
"Total Purchase Value" : total_value
})
most_popular = populars.sort_values("Purchase Count", ascending = False)
most_popular["Item Price"] = most_popular["Item Price"].map("${:.2f}".format)
most_popular["Total Purchase Value"] = most_popular["Total Purchase Value"].map("${:.2f}".format)
most_popular.head(5)
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#only change sorting from the previous dataframe
most_profitable = populars.sort_values("Total Purchase Value", ascending = False)
most_profitable["Item Price"] = most_profitable["Item Price"].map("${:.2f}".format)
most_profitable["Total Purchase Value"] = most_profitable["Total Purchase Value"].map("${:.2f}".format)
most_profitable.head(5)
###Output
_____no_output_____
###Markdown
Heroes Of Pymoli Data Analysis* Of the 1163 active players, the vast majority are male (84%). There also exists, a smaller, but notable proportion of female players (14%).* Our peak age demographic falls between 20-24 (44.8%) with secondary groups falling between 15-19 (18.60%) and 25-29 (13.4%). ----- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import numpy as np
# File to Load (Remember to Change These)
data_file = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(data_file)
###Output
_____no_output_____
###Markdown
Player Count
###Code
# to see headers in table
purchase_data.head()
# to see data info
purchase_data.info()
#to see all data types summary
purchase_data.describe(include='all')
# use pandas nunique for the number of unique players in the table
player_count = pd.DataFrame({"Total Players":[purchase_data["SN"].nunique()]})
player_count
###Output
_____no_output_____
###Markdown
* Display the total number of players Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# to see headers in table
purchase_data.head()
# Variables and Calculations
unique_items = purchase_data["Item ID"].nunique()
avg_price = purchase_data["Price"].mean()
unique_purchase = purchase_data["Purchase ID"].nunique()
total_rev = purchase_data["Price"].sum()
output = [unique_items, avg_price, unique_purchase, total_rev] # check calculations
output
# create summary data frame
purch_summary_df = pd.DataFrame(
{"Number of Unique Items":[unique_items],
"Average Price":["$"+'{0:,.2f}'.format(avg_price)],
"Number of Purchases":[unique_purchase],
"Total Revenue":["$"+'{0:,.2f}'.format(total_rev)]
})
purch_summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# to see headers in table
purchase_data.head()
# Variables and Calculations
unique_SN_df = purchase_data.drop_duplicates(["SN"])
unique_players = unique_SN_df
num_players = unique_players.count()[0]
total_gen = unique_SN_df["Gender"].value_counts()
prct_players = round((total_gen/num_players)*100,2)
# create summary data frame
gen_demog_df = pd.DataFrame({
"Total Count": total_gen,
"Percentage of Players": prct_players
})
#formatting
gen_demog_df["Percentage of Players"] = gen_demog_df["Percentage of Players"].astype(str) + "%"
gen_demog_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# to see headers in table
purchase_data.head()
# group by gender
gender = purchase_data.groupby(["Gender"])
# Variables and Calculations
purch_count = gender["SN"].count()
avg_purch_price = gender["Price"].mean()
purch_total = gender["Price"].sum()
avg_total_pp = purch_total / gen_demog_df["Total Count"]
# create summary data frame
purchasing_analysis=pd.DataFrame({
"Purchase Count":purch_count,
"Average Purchase Price":avg_purch_price,
"Total Purchase Value": purch_total,
"Avg Total Purchase per Person":avg_total_pp})
#formatting
purchasing_analysis["Average Purchase Price"] = purchasing_analysis["Average Purchase Price"].map("${:,.2f}".format)
purchasing_analysis["Total Purchase Value"] = purchasing_analysis["Total Purchase Value"].map("${:,.2f}".format)
purchasing_analysis["Avg Total Purchase per Person"] = purchasing_analysis["Avg Total Purchase per Person"].map("${:,.2f}".format)
purchasing_analysis
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# to see headers in table
purchase_data.head()
# Establish the bins
age_bins = [0, 9.9, 14.9, 19.9, 24.9, 29.9, 34.9, 39.90, 99999]
group_names = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
# Categorize the existing players using the age bins and add Age Ranges Column
unique_purchase_data = purchase_data.loc[:,["SN","Age"]].drop_duplicates(["SN"])
unique_purchase_data["Age Ranges"] = pd.cut(unique_purchase_data["Age"], age_bins, labels=group_names)
# Variables and Calculations
age_demog_count = unique_purchase_data["Age Ranges"].value_counts()
age_demog_prct = round((age_demog_count / num_players)*100,2)
# create summary data frame
age_demog_df = pd.DataFrame({
"Total Count": age_demog_count,
"Percentage of Players": age_demog_prct})
# formatting
age_demog_df["Percentage of Players"] = age_demog_df["Percentage of Players"].astype(str) + "%"
# sort dataframe
age_demog_df.sort_index()
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
purchase_data.head()
# Bins inclusive of all purchase data
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"], age_bins, labels=group_names)
# Variables and Calculations
age_purch_count = purchase_data.groupby(["Age Ranges"]).count()["Price"].rename("Purchase Count")
age_purch_price = purchase_data.groupby(["Age Ranges"]).mean()["Price"].rename("Average Purchase Price")
total_purch_value = purchase_data.groupby(["Age Ranges"]).sum()["Price"].rename("Total Purchase Value")
avg_purch_pp = total_purch_value / age_demog_df["Total Count"]
# create summary data frame
age_purch_df = pd.DataFrame({
"Purchase Count": age_purch_count,
"Average Purchase Price": age_purch_price,
"Total Purchase Value": total_purch_value,
"Avg Total Purchase per Person": avg_purch_pp})
# formatting
age_purch_df["Average Purchase Price"] = age_purch_df["Average Purchase Price"].map("${:,.2f}".format)
age_purch_df["Total Purchase Value"] = age_purch_df["Total Purchase Value"].map("${:,.2f}".format)
age_purch_df["Avg Total Purchase per Person"] = age_purch_df["Avg Total Purchase per Person"].map("${:,.2f}".format)
age_purch_df.sort_index()
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
purchase_data.head()
# Variables and Calculations
total_purch_value = purchase_data.groupby(["SN"]).sum()["Price"].rename("Total Purchase Value")
avg_purch_value = purchase_data.groupby(["SN"]).mean()["Price"].rename("Average Purchase Price")
purch_count = purchase_data.groupby(["SN"]).count()["Price"].rename("Purchase Count")
# create summary data frame
top_spenders_df = pd.DataFrame({
"Purchase Count": purch_count,
"Average Purchase Price": avg_purch_value,
"Total Purchase Value": total_purch_value})
# Sort the total purchase value column in descending order
top_spenders_df=top_spenders_df.sort_values("Total Purchase Value", ascending=False).head(5)
# Formatting
top_spenders_df["Average Purchase Price"] = top_spenders_df["Average Purchase Price"].map("${:,.2f}".format)
top_spenders_df["Total Purchase Value"] = top_spenders_df["Total Purchase Value"].map("${:,.2f}".format)
top_spenders_df
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
purchase_data.head()
# extracting data
item_data = purchase_data.loc[:,["Item ID", "Item Name", "Price"]]
# Variables and Calculations
total_purch_value = item_data.groupby(["Item ID","Item Name"]).sum()["Price"].rename("Total Purchase Value")
item_price = item_data.groupby(["Item ID","Item Name"]).mean()["Price"]
purch_count = item_data.groupby(["Item ID","Item Name"]).count()["Price"].rename("Purchase Count")
# create summary data frame
item_data_df = pd.DataFrame({
"Purchase Count": purch_count,
"Item Price": item_price,
"Total Purchase Value": total_purch_value
})
# Sort the purchase count column in descending order
item_data_df = item_data_df.sort_values("Purchase Count", ascending=False).head(5)
# formatting
item_data_df["Item Price"] = item_data_df["Item Price"].map("${:,.2f}".format)
item_data_df["Total Purchase Value"] = item_data_df["Total Purchase Value"].map("${:,.2f}".format)
item_data_df
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Sort the above table by total purchase value in descending order
item_data_df.sort_values("Total Purchase Value", ascending=False).head(5)
###Output
_____no_output_____ |
models/ML Pipeline Preparation-2.ipynb | ###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'wordnet'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
import pickle
# load data from database
engine = create_engine('sqlite:///ETL_Preparation.db')
df = pd.read_sql_table('ETL', engine)
X = df.message
y = df[df.columns[4:]]
category_names = y.columns
def tokenize(text):
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
for i in range(36):
print(y_test.columns[i], ':')
print(classification_report(y_test.iloc[:,i], y_pred[:,i]), '...................................................')
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
pipeline.get_params()
###Output
_____no_output_____
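###Markdown
Quick sanity check (a hedged example; the message text below is made up): calling the `tokenize` function defined above should return lower-cased lemmas with any URL replaced by a placeholder.
###Code
# Example only: inspect the tokenizer output on a sample string (the sample text is an assumption)
tokenize("Weather update - a cold front from Cuba could pass over Haiti, see https://example.com")
###Output
_____no_output_____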
###Markdown
4. Build a machine learning pipeline: This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
parameters = {
#'vect__ngram_range': ((1, 1),(1,2)),
#'vect__max_df': (0.5, 0.75, 1.0),
#'vect__max_features': (None, 5000, 10000),
#'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 100, 150],
'clf__estimator__min_samples_split': [2, 3, 4],
}
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs=4, verbose=2)
###Output
_____no_output_____
###Markdown
5. Train pipeline- Split data into train and test sets- Train pipeline
###Code
#MultiOutputClassifier(KNeighborsClassifier()).fit(X, y)
cv.fit(X_train, y_train)
cv.cv_results_   # note: the older grid_scores_ attribute was removed from scikit-learn
###Output
_____no_output_____
###Markdown
6. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
#finding the best paramesters based on grip search
print(cv.best_params_)
#building new model
optimised_model = cv.best_estimator_
print (cv.best_estimator_)
y_pred = optimised_model.predict(X_test)
for i in range(36):
print(y_test.columns[i], ':')
print(classification_report(y_test.iloc[:,i], y_pred[:,i]), '...................................................')
###Output
_____no_output_____
###Markdown
7. Improve your modelUse grid search to find better parameters. 7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
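# A minimal sketch (not from the original notebook) of how the tuned model could be
# summarised with accuracy, precision and recall per category. It assumes the
# `optimised_model`, `X_test` and `y_test` objects defined in the cells above.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_pred_tuned = optimised_model.predict(X_test)
for i, col in enumerate(y_test.columns):
    acc = accuracy_score(y_test.iloc[:, i], y_pred_tuned[:, i])
    prec = precision_score(y_test.iloc[:, i], y_pred_tuned[:, i], average='weighted')
    rec = recall_score(y_test.iloc[:, i], y_pred_tuned[:, i], average='weighted')
    print('{}: accuracy={:.3f}, precision={:.3f}, recall={:.3f}'.format(col, acc, prec, rec))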
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
###Code
with open('MLclassifier.pkl', 'wb') as file:
pickle.dump(optimised_model, file)
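# Illustrative check (not in the original notebook): the pickled classifier can be
# reloaded later with pickle.load before being used for predictions.
# with open('MLclassifier.pkl', 'rb') as file:
#     reloaded_model = pickle.load(file)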
###Output
_____no_output_____ |
content/02_data/00_introduction_to_colab/colab.ipynb | ###Markdown
Copyright 2019 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Introduction to Colab In this unit we will explore [Colaboratory](https://colab.research.google.com/) (Colab for short). Colab is a tool for notebook-based programming. This style of programming turns out to be a great platform for machine learning education and research!A **notebook** is a file that contains code, documentation, and output from the execution of the code. [Jupyter](https://jupyter.org/) is currently the most popular form of notebook.Jupyter notebooks can be edited and executed locally. It turns out that they are also very effective when they are hosted in the cloud. **Colab** is just that. Colab is a platform that stores notebooks in Google Drive and executes the code in the notebooks on virtual machines in the cloud. This allows for a zero-setup instant environment for working on data science problems. Notebooks Colab notebooks are *almost* exactly the same as Jupyter notebooks. You can open a notebook created in Jupyter in Colab, and vice versa. Since Colab is running online and created by Google, it has a few different features related to integration with Google products. In practice this shouldn't affect you during this course. If you do find yourself downloading the notebooks and running them directly in Jupyter, you might have to adapt your notebook when a Google-specific feature is used. CellsA notebook is a list of cells. There are two types of cells: code cells and text cells. We'll work with both in this lab. Code CellsBelow is a **code cell**. You can click in the cell to select it. To execute the code in the cell you have a few options:1. Click the **Play icon** in the left gutter of the cell.1. Type **Cmd/Ctrl+Enter** to run the cell in place.1. Type **Shift+Enter** to run the cell and move focus to the next cell (adding one if none exists).1. Type **Alt+Enter** to run the cell and insert a new code cell immediately below it.1. Click the **Runtime** menu and select **Run the focused cell**.You'll notice in the **Runtime** menu, there are also options for running all cells, cells before, cells after, and the selected cells.
###Code
a = 10
a
###Output
_____no_output_____
###Markdown
After you run a cell, you can see the output displayed immediately after the cell.You might have also noticed a little delay the first time you ran the code cell. This is related to *runtimes*, which we will talk about soon.When you run code, Colab passes that code to [IPython](https://ipython.org/) to execute. The IPython session doesn't restart for every code cell, so you can do things like set a variable in one block:
###Code
a = 1000
###Output
_____no_output_____
###Markdown
And then use that variable in another block.
###Code
print(a)
###Output
_____no_output_____
###Markdown
It doesn't matter which order the code blocks show up in a notebook. What matters is the order that they are executed in.In the two code blocks below we define a variable in one and print the variable in the other. If you run the first block before the second, you'll get an error. If you run the second and then the first, everything will work fine.
###Code
# This cell should be executed after the subsequent cell.
print(b)
# This cell should be executed first since it defines 'b'.
b = 1234
###Output
_____no_output_____
###Markdown
Most programs execute in a very defined flow. The nature of an interactive programming environment allows you to run code cells in any order that fits your workflow. This can be very convenient, but it can also lead to tricky bugs where variables have unexpected values due to the order in which the data scientist ran them in their exploration. Exercise 1: Writing and Running Code in ColabThis Colab runs Python 3 code. Let's write and execute some Python in the cell below.Write the following code in the student solution cell:```python print("Hello Colab!")```And then run the cell. **Student Solution**
###Code
# Your Solution Goes Here
###Output
_____no_output_____
###Markdown
--- Shell CommandsWhen you run code through a notebook, that code is running on a computer somewhere. With Colab, that machine is likely a virtual machine in one of Google's data centers. These machines are full [Linux](https://www.linux.org/) installations.Linux offers many powerful commands through what is known as a **shell**. You can access these commands using the exclamation point, `!`, at the start of a line.For example, the shell command to list all of the files in a directory is `ls`. To run `ls` in the shell simply type `!ls` into a code cell, and execute it.You can see `ls` in action below.
###Code
!ls
###Output
_____no_output_____
###Markdown
There are many more shell commands. The University of Washington has a nice [reference card](https://courses.cs.washington.edu/courses/cse390a/14au/bash.html) listing some of the more common commands. Exercise 2: Running Shell CommandsFind the shell command that displays the current working directory. Execute that command in a code block. **Student Solution**
###Code
# Your Solution Goes Here
###Output
_____no_output_____
###Markdown
--- MagicsWe've seen Python code and shell commands in code blocks. There are other commands called **magics** that can change how a line or cell works. This is a concept native to Jupyter, so you can consult the [documentation for Jupyter's magics](http://nbviewer.jupyter.org/github/ipython/ipython/blob/1.x/examples/notebooks/Cell%20Magics.ipynb) to see a list of magics. Line magics start with a single percentage mark, `%`, and are limited to operating on one line of the cell.For instance, the `%timeit` magic below runs the line of code passed to it multiple times and records those times.
###Code
import numpy as np
%timeit np.linalg.eigvals(np.random.rand(100, 100))
###Output
_____no_output_____
###Markdown
Cell magics start with two percentage signs, `%%`, and apply to all subsequent content in the cell. For example, the `%%html` magic interprets the cell as HTML and outputs rendered HTML instead of raw text.
###Code
%%html
<marquee style='width: 30%; color: blue;'><b>Whee!</b></marquee>
###Output
_____no_output_____
###Markdown
Exercise 3: Magics for Shell CommandsWe have seen how to run a shell command using the exclamation point, `!`. You can also use a shell magic to run shell commands without having to prefix each with `!`.Find a magic that tells Colab to interpret a code cell as Bash (a common Linux shell), then run the `ls` command using that magic. **Student Solution**
###Code
# Your Solution Goes Here
###Output
_____no_output_____
###Markdown
--- Charts and GraphsColab also allows for charts and graphs to be displayed inline. Don't pay too much attention to the code below, but notice that when you run the code block, a chart is displayed in the output.
###Code
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Fills and Alpha Example")
plt.show()
###Output
_____no_output_____
###Markdown
Getting HelpColab provides hints to explore attributes of Python objects, as well as to quickly view documentation strings. As an example, first run the following cell to import the [`numpy`](http://www.numpy.org) module.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Now, insert your cursor after `np.random` below and type a dot: `.`. Wait just a moment, and you should see a drop-down list of all of the attributes of `np.random`.If you don't see the drop-down list, try pressing the **Tab** key. Some platforms have slightly different triggers for the suggested completions.
###Code
np.random
###Output
_____no_output_____
###Markdown
If you type an opening parenthesis after `np.random.rand` below, you should see a pop-up with documentation about the function. If not, try pressing the **Tab** key.
###Code
np.random.rand
###Output
_____no_output_____
###Markdown
To open the documentation in a persistent pane at the bottom or right-hand side of your screen, add a **?** after the object or method name and execute the cell using **Cmd/Ctrl+Enter**:
###Code
np.random?
###Output
_____no_output_____ |
downloaded_kernels/loan_data/kernel_247.ipynb | ###Markdown
Loans Per Capita I've seen a lot of people who are exploring this data look at the raw number of loans per state. It's interesting in that it fairly accurately shows the ranking of states by population. So, in this short script, I look at the loans granted per capita. I got the population data from [this](https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_population) Wikipedia page.
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('../input/loan.csv', usecols = ['loan_amnt', 'addr_state'])
perStatedf = df.groupby('addr_state', as_index = False).count().sort_values(by = 'loan_amnt', ascending=False)
perStatedf.columns = ['State', 'Num_Loans']
###Output
_____no_output_____
###Markdown
Here's the plot of the raw loan numbers by state.
###Code
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='Num_Loans', data=perStatedf)
ax.set(ylabel = 'Number of Loans', title = 'Loans per State')
plt.show()
###Output
_____no_output_____
###Markdown
I load the population data in as a dictionary, convert it to a dataframe and merge it with my other data. I could have probably found an easier way to load in the population data without entering it in by hand, but I'm pretty good at ten key so it took less time than looking for the 'easier' way.
###Code
statePop = {'CA' : 39144818,
'TX' : 27469144,
'FL' : 20271878,
'NY' : 19795791,
'IL' : 12859995,
'PA' : 12802503,
'OH' : 11613423,
'GA' : 10214860,
'NC' : 10042802,
'MI' : 9922576,
'NJ' : 8958013,
'VA' : 8382993,
'WA' : 7170351,
'AZ' : 6828065,
'MA' : 6794422,
'IN' : 6619680,
'TN' : 6600299,
'MO' : 6083672,
'MD' : 6006401,
'WI' : 5771337,
'MN' : 5489594,
'CO' : 5456574,
'SC' : 4896146,
'AL' : 4858979,
'LA' : 4670724,
'KY' : 4425092,
'OR' : 4028977,
'OK' : 3911338,
'CT' : 3890886,
'IA' : 3123899,
'UT' : 2995919,
'MS' : 2992333,
            'AR' : 2978204,   # Arkansas (the later 'AK' entry is Alaska)
'KS' : 2911641,
'NV' : 2890845,
'NM' : 2085109,
'NE' : 1896190,
'WV' : 1844128,
'ID' : 1654930,
'HI' : 1431603,
'NH' : 1330608,
'ME' : 1329328,
'RI' : 1053298,
'MT' : 1032949,
'DE' : 945934,
'SD' : 858469,
'ND' : 756927,
'AK' : 738432,
'DC' : 672228,
'VT' : 626042,
'WY' : 586107}
statePopdf = pd.DataFrame.from_dict(statePop, orient = 'index').reset_index()
statePopdf.columns = ['State', 'Pop']
perStatedf = pd.merge(perStatedf, statePopdf, on=['State'], how = 'inner')
perStatedf['PerCaptia'] = perStatedf.Num_Loans / perStatedf.Pop
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='PerCaptia', data=perStatedf.sort_values(by = 'PerCaptia', ascending=False))
ax.set(ylabel = 'Loans per Capita', title = 'Per Capita Loans by State')
plt.show()
###Output
_____no_output_____
###Markdown
Here we can see that per person, Nevada takes out the most loans by a fair margin. The former leader, California, is now ranked at number 10. Now, because I have the data right there, I'm going to look at the loan amount by state and the per capita loan amount by state.
###Code
perStatedf = df.groupby('addr_state', as_index = False).sum().sort_values(by = 'loan_amnt', ascending=False)
perStatedf.columns = ['State', 'loan_amt']
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='loan_amt', data=perStatedf)
ax.set(ylabel = 'Total Loan Amount', title = 'Total Loan Amount per State')
plt.show()
perStatedf = pd.merge(perStatedf, statePopdf, on=['State'], how = 'inner')
perStatedf['PerCaptia'] = perStatedf.loan_amt / perStatedf.Pop
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='PerCaptia', data=perStatedf.sort_values(by = 'PerCaptia', ascending=False))
ax.set(ylabel = 'Loan Amount per Capita', title = 'Per Capita Loan Amount by State')
plt.show()
###Output
_____no_output_____
###Markdown
We can see again that the raw loan amount by state follows the state populations pretty closely. Again, when you look at the per capita amounts, Nevada is at the top. Here we see that the former number 1, California, again drops in rank. It's now in thirteenth place. Next, I'm going to look at the per capita bad loans.
###Code
df = pd.read_csv('../input/loan.csv', usecols = ['loan_status', 'addr_state'])
df.loan_status.unique()
badLoan = ['Charged Off',
'Default',
'Late (31-120 days)',
'Late (16-30 days)', 'In Grace Period',
'Does not meet the credit policy. Status:Charged Off']
df['isBad'] = [ 1 if x in badLoan else 0 for x in df.loan_status]
perStatedf = df.groupby('addr_state', as_index = False).sum().sort_values(by = 'isBad', ascending=False)
perStatedf.columns = ['State', 'badLoans']
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='badLoans', data=perStatedf)
ax.set(ylabel = 'Number of Bad Loans', title = 'Total Bad Loans per State')
plt.show()
perStatedf = pd.merge(perStatedf, statePopdf, on=['State'], how = 'inner')
perStatedf['PerCaptia'] = perStatedf.badLoans / perStatedf.Pop
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='PerCaptia', data=perStatedf.sort_values(by = 'PerCaptia', ascending=False))
ax.set(ylabel = 'Bad Loans per Capita', title = 'Per Capita Bad Loans by State')
plt.show()
###Output
_____no_output_____
###Markdown
Again we see that Nevada tops the charts with the most per capita bad loans. The most interesting result is Washington DC. It is 5th in total loans per capita, but it is 30th in per capita bad loans. Looking at these results, I think looking at the percentage of bad loans by state would offer more insight into this.
###Code
df = pd.read_csv('../input/loan.csv', usecols = ['loan_status', 'addr_state'])
perStatedf = df.groupby('addr_state', as_index = False).count().sort_values(by = 'loan_status', ascending = False)
perStatedf.columns = ['State', 'totalLoans']
df['isBad'] = [ 1 if x in badLoan else 0 for x in df.loan_status]
badLoansdf = df.groupby('addr_state', as_index = False).sum().sort_values(by = 'isBad', ascending = False)
badLoansdf.columns = ['State', 'badLoans']
perStatedf = pd.merge(perStatedf, badLoansdf, on = ['State'], how = 'inner')
perStatedf['percentBadLoans'] = (perStatedf.badLoans / perStatedf.totalLoans)*100
fig, ax = plt.subplots(figsize = (16,8))
ax = sns.barplot(x='State', y='percentBadLoans', data=perStatedf.sort_values(by = 'percentBadLoans', ascending=False))
ax.set(ylabel = 'Percent', title = 'Percent of Bad Loans by State')
plt.show()
perStatedf.sort_values(by = 'percentBadLoans', ascending = False).head()
###Output
_____no_output_____ |
beginner-lessons/interdisciplinary-communication/Welcome.ipynb | ###Markdown
Welcome to the Hour of CI!The Hour of Cyberinfrastructure (Hour of CI) project will introduce you to the world of cyberinfrastructure (CI). If this is your first lesson, then we recommend starting with the **[Gateway Lesson](https://www.hourofci.org/gateway-lesson)**, which will introduce you to the Hour of CI project and the eight knowledge areas that make up Cyber Literacy for Geographic Information Science. This is the **Beginner Interdisciplinary Communication** lesson.To start, click on the "Run this cell" button below to setup your Hour of CI environment. It looks like this:
###Code
!cd ../..; sh setupHourofCI # Run this cell (button on left) to setup your Hour of CI environment
###Output
Hour of CI Setup Starting ...
Hour of CI Setup Complete! |
project1/project1.ipynb | ###Markdown
Project 1 - Mc907/Mo651 - Mobile Robotics Student: Luiz Eduardo Cartolano - RA: 183012 Instructor: Esther Luna Colombini Github Link: [Project Repository](https://github.com/luizcartolano2/mc907-mobile-robotics) Subject of this Work: The general objective of this work is to build, on the V-REP robotic simulator, an odometry and feature extraction system for the Pioneer P3-DX robot. Goals: 1. Implement the kinematic model of the differential robot P3DX and compute the robot odometry through its kinematic model. 2. Acquire sensor data as the robot moves around and display features (point cloud, objects, etc.) extracted from them in global coordinates. Code Starts Here Import of used libraries
###Code
from lib import vrep
import sys, time
from src import robot as rb
from src.utils import vrep2array
import math
from time import time
import numpy as np
import cv2
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import skfuzzy as fuzz
import skfuzzy.control as ctrl
###Output
_____no_output_____
###Markdown
Defining the kinematic model of the Pioneer P3DX For this project, the configuration of the mobile robot is characterized by the position (x,y) and the orientation in a Cartesian coordinate frame. Using the following parameters:1. $V_R$: linear velocity of the right wheel.2. $V_L$: linear velocity of the left wheel.3. $W$: angular velocity of the mobile robot.4. $X$: abscissa of the robot.5. $Y$: ordinate of the robot.6. $X,Y$ : the actual position coordinates.7. $\theta$: orientation of the robot.8. $L$: the distance between the driving wheels.The kinematic model is given by these equations [1](https://www.hindawi.com/journals/cin/2016/9548482/abs/):\begin{align}\frac{dX}{dt} & = \frac{V_L + V_R}{2} \cdot \cos(\theta) \\\frac{dY}{dt} & = \frac{V_L + V_R}{2} \cdot \sin(\theta) \\\frac{d \theta}{dt} & = \frac{V_R - V_L}{L} \\\end{align}where $X$, $Y$ and $\theta$ are the robot's actual position and orientation angle in the world reference frame. In simulation, we use the discrete form to build a model of the robot. The discrete form of the kinematic model is given by the following equations:\begin{align}X_{k+1} & = X_k + T \cdot \frac{V_{lk} + V_{rk}}{2} \cdot \cos\left(\theta_k + \frac{\Delta\theta_k}{2}\right) \\Y_{k+1} & = Y_k + T \cdot \frac{V_{lk} + V_{rk}}{2} \cdot \sin\left(\theta_k + \frac{\Delta\theta_k}{2}\right) \\\theta_{k+1} & = \theta_k + T \cdot \frac{V_{rk} - V_{lk}}{L} \\\end{align}where $X_{k+1}$ and $Y_{k+1}$ represent the position of the center axis of the mobile robot, $\Delta\theta_k = T \cdot \frac{V_{rk} - V_{lk}}{L}$ is the orientation increment, and $T$ is the sampling time.
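As a quick illustration of the discrete equations above, a single update step can be sketched as a standalone function (this is only an illustrative sketch; the actual odometry is computed by the `Odometry` class below, and the variable names here are arbitrary):
```python
import math

def kinematic_step(x, y, theta, v_left, v_right, dt, wheel_base):
    """One discrete step of the differential-drive kinematic model (illustrative)."""
    delta_theta = (v_right - v_left) * dt / wheel_base  # orientation increment
    delta_space = (v_right + v_left) * dt / 2.0         # distance travelled by the center
    x_new = x + delta_space * math.cos(theta + delta_theta / 2.0)
    y_new = y + delta_space * math.sin(theta + delta_theta / 2.0)
    return x_new, y_new, theta + delta_theta
```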
###Code
class Pose:
"""
A class used to store the robot pose.
...
Attributes
----------
x : double
The x position of the robot on the map
y : double
The y position of the robot on the map
orientation : double
The angle theta of the robot on the map
Methods
-------
The class doesn't have any methods
"""
def __init__(self, x=None, y=None, orientation=None):
self.x = x
self.y = y
self.orientation = orientation
class Odometry():
"""
A class used to implement methods that allow a robot to calculate his own odometry.
...
Attributes
----------
robot : obj
The robot object
lastPose : obj Pose
Store the robot's pose during his movement
lastTimestamp : time
Store the last timestamp
left_vel : double
Store the velocity of the left robot wheel
right_vel : double
Store the velocity of the right robot wheel
delta_time : double
Store how much time has passed
delta_theta : double
Store how the orientation change
delta_space : double
Store how the (x,y) change
Methods
-------
ground_truth_updater()
Function to update the ground truth, the real pose of the robot at the simulator
odometry_pose_updater()
Function to estimate the pose of the robot based on the kinematic model
"""
def __init__(self, robot):
self.robot = robot
self.lastPose = None
self.lastTimestamp = time()
self.left_vel = 0
self.right_vel = 0
self.delta_time = 0
self.delta_theta = 0
self.delta_space = 0
def ground_truth_updater(self):
"""
Function to update the ground truth, the real pose of the robot at the simulator
"""
# get the (x,y,z) position of the robot at the simulator
pose = self.robot.get_current_position()
# get the orientation of the robot (euler angles)
orientation = self.robot.get_current_orientation()
# return an pose object (x,y,theta)
return Pose(x=pose[0], y=pose[1], orientation=orientation[2])
def odometry_pose_updater(self):
"""
        Function to estimate the pose of the robot based on the kinematic model
"""
if self.lastPose is None:
self.lastPose = self.ground_truth_updater()
return self.lastPose
# get the actual timestamp
time_now = time()
# get the robot linear velocity for the left and right wheel
left_vel, right_vel = self.robot.get_linear_velocity()
# calculate the difference between the acutal and last timestamp
delta_time = time_now - self.lastTimestamp
        # calculate the angular displacement - based on the kinematic model
delta_theta = (right_vel - left_vel) * (delta_time / self.robot.ROBOT_WIDTH)
        # calculate the linear displacement - based on the kinematic model
delta_space = (right_vel + left_vel) * (delta_time / 2)
# auxiliary function to sum angles
add_deltha = lambda start, delta: (((start+delta)%(2*math.pi))-(2*math.pi)) if (((start+delta)%(2*math.pi))>math.pi) else ((start+delta)%(2*math.pi))
# calculate the new X pose
x = self.lastPose.x + (delta_space * math.cos(add_deltha(self.lastPose.orientation, delta_theta/2)))
# calculate the new Y pose
y = self.lastPose.y + (delta_space * math.sin(add_deltha(self.lastPose.orientation, delta_theta/2)))
# calculate the new Orientation pose
theta = add_deltha(self.lastPose.orientation, delta_theta)
        # update the state of the class
self.lastPose = Pose(x, y, theta)
self.lastTimestamp = time_now
self.left_vel = left_vel
self.right_vel = right_vel
self.delta_time = delta_time
self.delta_theta = delta_theta
self.delta_space = delta_space
return self.lastPose
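# Illustrative usage only (assumes a connected rb.Robot instance, as created later in the notebook):
#   odom = Odometry(robot)
#   pose = odom.odometry_pose_updater()
#   print(pose.x, pose.y, pose.orientation)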
###Output
_____no_output_____
###Markdown
Defining the class that controls the robot walker For this project we are going to use simple controllers in order to make the robot move in the map. Controllers:**1. Wall-Follower** The Wall-Follower is an indoor navigation behavior in which the robot automatically approaches and follows a wall. In order to implement the behavior, we used the two ultrasonic sensors in the front of the robot for obstacle avoidance. The other two ultrasonic sensors on the side are used to detect a wall and tell the robot's relative position to the wall. So, with that information it's possible to follow this pipeline: 1. First the robot finds a wall: it walks until the front sensors detect an obstacle. 2. Align with the wall: the robot turns, aiming to align its left side with the wall. 3. Follow the wall: once aligned with the wall, it just keeps a safe distance and follows it.
###Code
class Walker():
"""
A class used to implement methods that allow a robot to walk, based on different behaviors.
...
Attributes
----------
robot : obj
the robot object
Methods
-------
find_wall()
Function that aims to find a wall in the front of the robot.
turn_left()
Function that aims to make the robot turn left.
follow_wall()
Function that aims to make the robot follow the wall that stays at his left side.
"""
def __init__(self, robot):
"""
Instanciate the object
"""
self.robot = robot
def find_wall(self):
"""
Function that aims to find a wall in the front of the robot.
"""
# read the sonars
sensors = self.robot.read_ultrassonic_sensors()
# loop to make the robot walk until he finds a wall
if sensors[3] > 0.45 and sensors[4] > 0.45:
# set the initial velocity
self.robot.set_left_velocity(3)
self.robot.set_right_velocity(3)
return False
else:
return True
def turn_left(self):
"""
Function that aims to make the robot turn left.
"""
# read the sonar sensors
sensors = self.robot.read_ultrassonic_sensors()
if (sensors[0] > 0.5 and sensors[15] > 0.5) or (sensors[0] - sensors[15] > 0.1) or (sensors[15] - sensors[0] > 0.1):
# start the motors velocity
self.robot.set_left_velocity(1.5)
self.robot.set_right_velocity(0)
return False
else:
return True
def follow_wall(self):
"""
Function that aims to make the robot follow the wall that stays at his left side.
"""
sensors = self.robot.read_ultrassonic_sensors()
if sensors[3] < 0.6 or sensors[4] < 0.6 or sensors[5] < 0.6 or sensors[6] < 0.6:
self.robot.set_right_velocity(0)
self.robot.set_left_velocity(1.5)
elif sensors[0] < 0.37 or sensors[1] < 0.37 or sensors[2] < 0.37:
self.robot.set_right_velocity(0)
self.robot.set_left_velocity(1.5)
elif sensors[0] > 0.55 or sensors[1] > 0.55:
self.robot.set_right_velocity(1.5)
self.robot.set_left_velocity(0)
else:
self.robot.set_right_velocity(1.5)
self.robot.set_left_velocity(1.5)
return
###Output
_____no_output_____
###Markdown
**2. Fuzzy** A fuzzy control system is a control system based on fuzzy logic, a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively). The input variables in a fuzzy control system are in general mapped by sets of membership functions, known as "fuzzy sets". The process of converting a crisp input value to a fuzzy value is called "fuzzification". A control system may also have various types of switch, or "ON-OFF", inputs along with its analog inputs, and such switch inputs of course will always have a truth value equal to either 1 or 0, but the scheme can deal with them as simplified fuzzy functions that happen to be either one value or another. Given "mappings" of input variables into membership functions and truth values, the microcontroller then makes decisions for what action to take, based on a set of "rules", each of the form:~~~ IF brake temperature IS warm AND speed IS not very fast THEN brake pressure IS slightly decreased.~~~ For this project the implemented fuzzy controller is very simple and aims only to make the robot able to avoid obstacles on its way. It uses the ultrasonic sensors for its three inputs (front, left and right distance) and outputs the linear velocity for both wheels.
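For illustration only, a rule of this kind can be written with `skfuzzy.control` roughly as below (the variable names, universes and membership functions are made up for the example; the robot-specific rules are defined in the class that follows):
```python
import numpy as np
import skfuzzy as fuzz
import skfuzzy.control as ctrl

# Made-up universes and membership functions, just to show the rule syntax.
temperature = ctrl.Antecedent(np.arange(0, 101, 1), 'brake_temperature')
speed = ctrl.Antecedent(np.arange(0, 151, 1), 'speed')
pressure = ctrl.Consequent(np.arange(0, 101, 1), 'brake_pressure')

temperature['warm'] = fuzz.trimf(temperature.universe, [30, 55, 80])
speed['moderate'] = fuzz.trimf(speed.universe, [20, 60, 100])
pressure['slightly_decreased'] = fuzz.trimf(pressure.universe, [30, 45, 60])

# "IF temperature IS warm AND speed IS moderate THEN pressure IS slightly decreased"
rule = ctrl.Rule(antecedent=(temperature['warm'] & speed['moderate']),
                 consequent=pressure['slightly_decreased'])
```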
###Code
class FuzzyControler():
"""
A class used to implement methods that allow a robot to walk, based on a fuzzy logic controller.
...
Attributes
----------
forward: skfuzzy object
Skfuzzy input object
left: skfuzzy object
Skfuzzy input object
right: skfuzzy object
Skfuzzy input object
output_left: skfuzzy object
Skfuzzy output object
output_right: skfuzzy object
Skfuzzy output object
rules: skfuzzy object
List of rules to the fuzzy
control: skfuzzy object
Skfuzzy controller object
simulator: skfuzzy object
Skfuzzy simulator object
Methods:
-------
create_inputs()
Function to create skfuzzy input functions
create_outputs()
Function to create skfuzzy output functions
create_rules()
Function to create skfuzzy rules
create_control()
Function to create skfuzzy controller
show_fuzzy()
Function to show the fuzzy rules as a graph
create_simulator()
Function that controls the fuzzy pipeline
simulate()
Function that give outputs velocity based on input distance
"""
def __init__(self):
self.forward = None
self.left = None
self.right = None
self.output_left = None
self.output_right = None
self.rules = []
self.control = None
self.simulator = None
def create_inputs(self):
# variable universe
x_dst = np.arange(0, 1, 0.00001)
# create skfuzzy object
self.forward = ctrl.Antecedent(x_dst, 'forward')
self.left = ctrl.Antecedent(x_dst, 'left')
self.right = ctrl.Antecedent(x_dst, 'right')
# set the variable universe as near, medium and far
self.forward['near'] = fuzz.trapmf(x_dst, [0.00397, 0.0145, 0.242063492063492, 0.374])
self.forward['medium'] = fuzz.trapmf(x_dst, [0.295, 0.366, 0.55, 0.652116402116402])
self.forward['far'] = fuzz.trapmf(x_dst, [0.583333333333333, 0.702, 1.00, 1.0])
self.left['near'] = fuzz.trapmf(x_dst, [0.00132, 0.0119, 0.261, 0.358465608465608])
self.left['medium'] = fuzz.trapmf(x_dst, [0.287, 0.374, 0.55, 0.675925925925926])
self.left['far'] = fuzz.trapmf(x_dst, [0.61, 0.705026455026455, 1.03, 1.3])
self.right['near'] = fuzz.trapmf(x_dst, [0.00132, 0.0119, 0.261, 0.358465608465608])
self.right['medium'] = fuzz.trapmf(x_dst, [0.287, 0.374, 0.55, 0.675925925925926])
self.right['far'] = fuzz.trapmf(x_dst, [0.61, 0.705026455026455, 1.03, 1.3])
return
def create_outputs(self):
# variable universe
x_out = np.arange(-1,1, 0.00001)
# create skfuzzy object
self.output_left = ctrl.Consequent(x_out, 'output_left')
self.output_right = ctrl.Consequent(x_out, 'output_right')
# set the variable universe as reverse, slow, normal and fast
        self.output_left['reverse'] = fuzz.trapmf(x_out, [-1, -1, 0, 0])
        self.output_left['slow'] = fuzz.trapmf(x_out, [0.0, 0.0, 0.5, 0.5])
        self.output_left['normal'] = fuzz.trapmf(x_out, [0.4, 0.4, 0.8, 0.8])
        self.output_left['fast'] = fuzz.trapmf(x_out, [0.7, 0.7, 1.0, 1.0])
        self.output_right['reverse'] = fuzz.trapmf(x_out, [-1, -1, 0, 0])
        self.output_right['slow'] = fuzz.trapmf(x_out, [0.0, 0.0, 0.5, 0.5])
        self.output_right['normal'] = fuzz.trapmf(x_out, [0.4, 0.4, 0.8, 0.8])
        self.output_right['fast'] = fuzz.trapmf(x_out, [0.7, 0.7, 1.0, 1.0])
return
def create_rules(self, forward, left, right, output_left, output_right):
# rule 1
rule1 = ctrl.Rule(antecedent=(forward['near'] & left['near'] & right['far']),
consequent=(output_left['fast'], output_right['slow']))
# rule 2
rule2 = ctrl.Rule(antecedent=(forward['near'] & left['near'] & right['medium']),
consequent=(output_left['normal'], output_right['fast']))
# rule 3
rule3 = ctrl.Rule(antecedent=(forward['medium'] & left['medium'] & right['far']),
consequent=(output_left['normal'], output_right['slow']))
# rule 4
rule4 = ctrl.Rule(antecedent=(forward['medium'] & left['near'] & right['far']),
consequent=(output_left['fast'], output_right['slow']))
# rule 5
rule5 = ctrl.Rule(antecedent=(forward['medium'] & left['medium'] & right['medium']),
consequent=(output_left['slow'], output_right['slow']))
# rule 6
rule6 = ctrl.Rule(antecedent=(forward['medium'] & left['near'] & right['medium']),
consequent=(output_left['fast'], output_right['slow']))
# rule 7
rule7 = ctrl.Rule(antecedent=(forward['far'] & left['far'] & right['far']),
consequent=(output_left['fast'], output_right['fast']))
# rule 8
rule8 = ctrl.Rule(antecedent=(forward['near'] & left['medium'] & right['far']),
consequent=(output_left['slow'], output_right['fast']))
# rule 9
rule9 = ctrl.Rule(antecedent=(forward['near'] & left['near']),
consequent=(output_left['slow'], output_right['fast']))
# rule 10
rule10 = ctrl.Rule(antecedent=(left['near']),
consequent=(output_left['fast'], output_right['slow']))
# rule 11
rule11 = ctrl.Rule(antecedent=(forward['medium'] & left['medium'] & right['near']),
consequent=(output_left['normal'], output_right['reverse']))
# rule 12
rule12 = ctrl.Rule(antecedent=(forward['medium'] & left['far'] & right['near']),
consequent=(output_left['slow'], output_right['fast']))
# rule 13
rule13 = ctrl.Rule(antecedent=(forward['near'] & left['medium'] & right['near']),
consequent=(output_left['fast'], output_right['slow']))
# rule 14
rule14 = ctrl.Rule(antecedent=(forward['near'] & right['near']),
consequent=(output_left['slow'], output_right['normal']))
# rule 15
rule15 = ctrl.Rule(antecedent=(right['near']),
consequent=(output_left['reverse'], output_right['normal']))
# rule 16
rule16 = ctrl.Rule(antecedent=(forward['near']),
consequent=(output_left['reverse'], output_right['normal']))
for i in range(1, 17):
self.rules.append(eval("rule" + str(i)))
return
def create_control(self):
# call function to create robot input
self.create_inputs()
# call function to create robot output
self.create_outputs()
# call function to create rules
self.create_rules(self.forward, self.left, self.right, self.output_left, self.output_right)
# create controller
        self.control = ctrl.ControlSystem(self.rules)
return
def show_fuzzy(self):
if self.control is None:
raise Exception("Control not created yet!")
else:
self.control.view()
return
def create_simulator(self):
if self.control is None:
# crete controller if it doensn't exist
self.create_control()
# create simulator object
self.simulator = ctrl.ControlSystemSimulation(self.control)
return
def simulate(self, input_foward=None, input_left=None, input_right=None):
if self.simulator is None:
# crete simulator if it doensn't exist
self.create_simulator()
# if there is no input raise exception
if input_foward is None or input_left is None or input_right is None:
raise Exception("Inputs can't be none")
# simulte the robot linear velocity based on given inputs
self.simulator.input['forward'] = input_foward
self.simulator.input['left'] = input_left
self.simulator.input['right'] = input_right
self.simulator.compute()
return self.simulator.output['output_left'], self.simulator.output['output_right']
###Output
_____no_output_____
###Markdown
Defining the Feature Extractor of the robot For this project we tried to implement two different strategies to extract features from the environment. Definition: Feature extraction and selection are important steps in the construction of a map to support the navigation of mobile robots in outdoor environments. The large amount of data acquired by the on-board sensors has to be reduced to retain only the crucial information for the navigation purpose. This procedure should be robust given the rough, dynamic and unpredictable conditions provided by outdoor scenarios. Extractors:**1. Image Extractor** Feature extraction operates on an image and returns one or more image features. Features are typically scalars (for example area or aspect ratio) or short vectors (for example the coordinate of an object or the parameters of a line). Image feature extraction is a necessary first step in using image data to control a robot. It is an information concentration step that reduces the data.**Note:** Unfortunately, the feature extractor using image input is not working as expected, so it's not going to be used on this project. During development we found several problems with the quality of the Hough transform output and also with converting the coordinates from the camera to "real world" coordinates. However, we are still going to explain the image processing techniques used and the pipeline of the extractor.**Techniques:**1. Hough transform: a feature extraction method for detecting simple shapes such as circles, lines etc. in an image. A "simple" shape is one that can be represented by only a few parameters. For example, a line. The polar form of a line is represented as: $\rho = x \cos(\theta) + y \sin(\theta)$. Here $\rho$ represents the perpendicular distance of the line from the origin in pixels, and $\theta$ is the angle measured in radians. An accumulator array is used because its bins collect the evidence (votes) about which lines exist in the image.Steps operated in order to extract lines from the image: 1. Initialize the accumulator 2. Detect edges 3. Voting by edge pixels 2. Corner detection: in order to detect corners, we iterate over the found lines and check whether an intersection point exists between them. For this, we use the following formula:\begin{align}P_x = \frac{(x_1 y_2-y_1 x_2)(x_3-x_4)-(x_1-x_2)(x_3 y_4-y_3 x_4)}{(x_1-x_2)(y_3-y_4)-(y_1-y_2)(x_3-x_4)} \\P_y = \frac{(x_1 y_2-y_1 x_2)(y_3-y_4)-(y_1-y_2)(x_3 y_4-y_3 x_4)}{(x_1-x_2)(y_3-y_4)-(y_1-y_2)(x_3-x_4)}\end{align}
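As a quick numeric sanity check of the intersection formula (purely illustrative, with made-up endpoints; the class below applies the same computation to the lines returned by the Hough transform):
```python
# Two lines given by endpoint pairs: a horizontal line y = 2 and a vertical line x = 3.
line1 = [(0, 2), (4, 2)]
line2 = [(3, 0), (3, 5)]

xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])
det = lambda a, b: a[0] * b[1] - a[1] * b[0]

div = det(xdiff, ydiff)
d = (det(*line1), det(*line2))
corner = (det(d, xdiff) / div, det(d, ydiff) / div)
print(corner)  # (3.0, 2.0), i.e. the expected corner
```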
###Code
class ImageError(Exception):
"""Raised when the input is None"""
pass
class Frame():
"""
A class used to store the image the robot saw.
...
Attributes
----------
pose : object
The pose of the robot in the moment the "picture was taken"
image : numpy.array
The matrix of pixels of the image
timestamp : time object
The time when the "picture was taken"
Methods
-------
The class doesn't have any methods
"""
def __init__(self, pose, image, wall_distance, timestamp=None):
self. pose = pose
self.image = image
self.wall_distance = wall_distance
self.timestamp = time()
class FeatureExtractor():
"""
A class used to implement the methods that are going to extract features to the robot.
...
Attributes
----------
robot : object
The robot object
firs_image : object
The frame object
Methods
-------
step(self)
Function that controls the pipeline of the feature extraction.
update(self)
Function that updates the robot stored image.
hough_transform(self, image)
Function that uses the hough transformed to detect lines on the image.
corner_detect(self, lines)
Function that detect corners from a list of lines.
"""
def __init__(self, robot):
self.robot = robot
self.first_image = None
def step(self):
"""
Function that controls the pipeline of the feature extraction.
"""
if self.update():
try:
lines = self.hough_transform(self.first_image.image)
corner = self.corner_detect(lines)
line_distance = self.find_distance(lines)
if line_distance != -1:
self.map_coordinates(line_distance, self.first_image.pose)
return (lines, corner)
except:
raise ImageError
else:
raise ImageError
def update(self):
"""
Function that updates the robot stored image.
"""
# get distance between robot and a wall
wall_distance = min(self.robot.read_ultrassonic_sensors()[3:5])
# get the (x,y,z) position of the robot at the simulator
pose = self.robot.get_current_position()
# get the orientation of the robot (euler angles)
orientation = self.robot.get_current_orientation()
# read camera
image = self.robot.read_vision_sensor()
image = vrep2array(image[1],image[0])
# form pose
pose = Pose(x=pose[0], y=pose[1], orientation=orientation[2])
# attribute image
self.first_image = Frame(pose=pose, image=image, wall_distance=wall_distance)
return True
def hough_transform(self, image):
"""
Function that uses the hough transformed to detect lines on the image.
:input: image - numpy.array of the image pixels
:return: points - list with the found lines
"""
# make the image gray (2 channels)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# apply the hough transformed
lines = cv2.HoughLines(gray, 1, np.pi/2, 60)
points = []
for line in lines:
for rho,theta in line:
# iterate over lines in order to find the (x_start, y_start) and (x_end, y_end) of the lines
a = np.cos(theta)
b = np.sin(theta)
x0 = a*rho
y0 = b*rho
x1 = int(x0 + 1000*(-b))
y1 = int(y0 + 1000*(a))
x2 = int(x0 - 1000*(-b))
y2 = int(y0 - 1000*(a))
points.append([(x1, y1), (x2, y2)])
return points
def corner_detect(self, lines):
"""
:input: lines - list with the lines found on the image
:return: corner - list with the (x,y) position of lines intersection
"""
corner = []
for line1, line2 in zip(lines, lines[1:]):
# find the distance of the x points between two lines
xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
# find the distance of the y point between two lines
ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])
# calc the determinant
det = lambda a, b: a[0] * b[1] - a[1] * b[0]
div = det(xdiff, ydiff)
if div == 0:
continue
else:
# calc the point of intercept
d = (det(*line1), det(*line2))
x = det(d, xdiff) / div
y = det(d, ydiff) / div
corner.append((x,y))
return corner
def map_coordinates(self, line_size, robot_pose):
angle = math.degrees(robot_pose.orientation)
        # debug output: report how the measured line length relates to the robot heading
        if -45 <= angle <= 45:
            print("ADD X: ", line_size)
        elif -135 < angle < -45:
            print("SUBTRACT Y: ", line_size)
        elif angle < -135 or angle > 135:
            print("SUBTRACT X: ", line_size)
        else:
            print("ADD Y: ", line_size)
        return
def find_distance(self, lines):
max_dist = -1
for line in lines:
if line[2] < 0.3:
pass
else:
dst = (((line[1][0] - line[0][0])**2) + ((line[1][1] - line[0][1])**2)) ** (1/2)
if dst > max_dist:
max_dist = dst
return max_dist
###Output
_____no_output_____
###Markdown
**2. KMeans Clustering Extractor** K-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. K-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. The algorithm has a loose relationship to the k-nearest neighbor classifier, a popular machine learning technique for classification that is often confused with k-means due to the name.**How It Works:** Given a set of observations $(x_1, x_2, ..., x_n)$, where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k $(\leq n)$ sets $S = \{S_1, S_2, ..., S_k\}$ so as to minimize the within-cluster variance. Formally, the objective is to find:\begin{align}\underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^k |S_i| \operatorname{Var} S_i \end{align}**How we use KMeans for mapping:** To apply KMeans for mapping we extract environment information from the LIDAR (laser sensors) when the robot is close to some object (we check the robot's distance from other objects using the ultrasonic sensors). So, once the LIDAR gives us the (x,y,z) information about the perceived objects on the scene, it was necessary to perform a few steps in order to acquire the data: 1. First we clean the data, extracting just the (x,y) positions 2. We apply the KMeans fit function (specifying the number of clusters) 3. We extract the centroids of the clusters as the positions of objects on the scene **Why we chose to use KMeans:** K-means is one of the simplest algorithms that uses an unsupervised learning method to solve well-known clustering problems. It works really well with large datasets, it's not very computationally expensive, and the algorithm is guaranteed to converge to a result.Furthermore, the major drawbacks K-means usually shows aren't a big problem for us here, since we don't need high quality clusters or totally noise-free information; that is, we can accept these drawbacks in order to have a fast algorithm that is also very easy to implement. And, obviously, we can't forget the main point, that it reduces the LIDAR data from around 500 positions to the set number of clusters we ask for.
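A minimal, self-contained illustration of this idea (synthetic 2D points standing in for LIDAR readings; the real extraction is done by the class below):
```python
import numpy as np
from sklearn.cluster import KMeans

# Two small synthetic point clouds, standing in for LIDAR hits on two objects.
rng = np.random.RandomState(0)
points = np.vstack([rng.normal([1.0, 1.0], 0.05, size=(50, 2)),
                    rng.normal([3.0, 2.0], 0.05, size=(50, 2))])

kmeans = KMeans(n_clusters=2, random_state=0).fit(points)
print(kmeans.cluster_centers_)  # roughly [[1, 1], [3, 2]] -> treated as object positions
```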
###Code
class KMeansClustering():
"""
A class used to implement KMeans to extract information from the enviroment.
...
Attributes
----------
robot : object
The robot object
Methods
-------
step()
Function that controls the pipeline of the feature extraction.
kmean_fit()
Function that fits the model
centroids_points()
Function that return cluster centroids
"""
def __init__(self, robot):
self.robot = robot
def step(self):
"""
Function that controls the pipeline of the feature extraction.
:return: centroid_pos - position of clusters centroids
"""
if min(self.robot.read_ultrassonic_sensors()[0:15]) < 1.0:
raw_lidar = self.robot.read_laser()
points = []
for i in range(0,len(raw_lidar),3):
points.append([raw_lidar[i], raw_lidar[i+1]])
clusters = self.kmean_fit(points)
centroids_pos = self.centroids_points(clusters)
return centroids_pos
else:
return np.array([])
def kmean_fit(self, points):
"""
Function that fits the model
:input: points - lidar data
:return: y_ - the kmeans object from sklearn
"""
kmeans = KMeans()
y_ = kmeans.fit(points)
return y_
def centroids_points(self, kmeans_obj):
"""
Function that return cluster centroids
:input: kmeans_obj - the kmeans object from sklearn
:return: position of clusters centroids
"""
return kmeans_obj.cluster_centers_
###Output
_____no_output_____
###Markdown
Controls robot actions A simple state machine that organizes the robot behavior and stores the information that will be plotted.
###Code
def state_machine(behavior="follow_wall"):
# first we create the robot and the walker object
robot = rb.Robot()
walker_behavior = Walker(robot)
# create the FeatureExtractor robot
#feature_extractor = FeatureExtractor(robot)
kmeans_extractor = KMeansClustering(robot)
# instantiate the odometry calculator
odometry_calculator = Odometry(robot=robot)
if behavior == "follow_wall":
# first we find a wall
while not walker_behavior.find_wall():
# read image
try:
#lin, cor = feature_extractor.step()
#lines.append(lin)
#corners.append(cor)
points = kmeans_extractor.step()
if points.size != 0:
points_kmeans.append(points)
except ImageError:
pass
# calculate the estimate new position after find a wall
temp_ground = odometry_calculator.ground_truth_updater()
ground_truth.append((temp_ground.x, temp_ground.y))
temp_odom = odometry_calculator.odometry_pose_updater()
odometry.append((temp_odom.x, temp_odom.y))
robot.stop()
# calculate the estimate new position after find a wall
temp_ground = odometry_calculator.ground_truth_updater()
ground_truth.append((temp_ground.x, temp_ground.y))
temp_odom = odometry_calculator.odometry_pose_updater()
odometry.append((temp_odom.x, temp_odom.y))
# then we align to the wall
while not walker_behavior.turn_left():
# read image
try:
#lin, cor = feature_extractor.step()
#lines.append(lin)
#corners.append(cor)
points = kmeans_extractor.step()
if points.size != 0:
points_kmeans.append(points)
except ImageError:
pass
# calculate the estimate new position after find a wall
temp_ground = odometry_calculator.ground_truth_updater()
ground_truth.append((temp_ground.x, temp_ground.y))
temp_odom = odometry_calculator.odometry_pose_updater()
odometry.append((temp_odom.x, temp_odom.y))
robot.stop()
# calculate the estimate new position after find a wall
temp_ground = odometry_calculator.ground_truth_updater()
ground_truth.append((temp_ground.x, temp_ground.y))
temp_odom = odometry_calculator.odometry_pose_updater()
odometry.append((temp_odom.x, temp_odom.y))
# now we follow the wall
while True:
# read image
try:
#lin, cor = feature_extractor.step()
#lines.append(lin)
#corners.append(cor)
points = kmeans_extractor.step()
if points.size != 0:
points_kmeans.append(points)
except ImageError:
pass
# calculate the estimate new position after find a wall
temp_ground = odometry_calculator.ground_truth_updater()
ground_truth.append((temp_ground.x, temp_ground.y))
temp_odom = odometry_calculator.odometry_pose_updater()
odometry.append((temp_odom.x, temp_odom.y))
# call the function that keeps following the left wall
walker_behavior.follow_wall()
else:
raise Exception("Not implemented!")
###Output
_____no_output_____
###Markdown
Main function - Execute the code here! Here is a simple signal handler implemented in order to make the simulator execution last for a given time period.
###Code
import signal
from contextlib import contextmanager
class TimeoutException(Exception): pass
@contextmanager
def time_limit(seconds):
def signal_handler(signum, frame):
raise TimeoutException("Timed out!")
signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(seconds)
try:
yield
finally:
signal.alarm(0)
try:
ground_truth = []
odometry = []
lines = []
corners = []
points_kmeans = []
with time_limit(90):
state_machine()
except TimeoutException as e:
print("Timed out!")
###Output
Connected to remoteApi server.
Pioneer_p3dx_ultrasonicSensor1 connected.
Pioneer_p3dx_ultrasonicSensor2 connected.
Pioneer_p3dx_ultrasonicSensor3 connected.
Pioneer_p3dx_ultrasonicSensor4 connected.
Pioneer_p3dx_ultrasonicSensor5 connected.
Pioneer_p3dx_ultrasonicSensor6 connected.
Pioneer_p3dx_ultrasonicSensor7 connected.
Pioneer_p3dx_ultrasonicSensor8 connected.
Pioneer_p3dx_ultrasonicSensor9 connected.
Pioneer_p3dx_ultrasonicSensor10 connected.
Pioneer_p3dx_ultrasonicSensor11 connected.
Pioneer_p3dx_ultrasonicSensor12 connected.
Pioneer_p3dx_ultrasonicSensor13 connected.
Pioneer_p3dx_ultrasonicSensor14 connected.
Pioneer_p3dx_ultrasonicSensor15 connected.
Pioneer_p3dx_ultrasonicSensor16 connected.
Vision sensor connected.
Laser connected.
Left motor connected.
Right motor connected.
Robot connected.
Timed out!
###Markdown
Some Results In order to show how the implemented code works we are going to run and discuss three experiments. First Experiment In the first experiment the robot was placed as shown in the following image: **The obtained results follow, and the discussions about them are below the graphics.**
###Code
# we take the negative in order to simulate the VREP orientation
odometry_x_1 = [-i[0] for i in odometry]
odometry_y_1 = [-i[1] for i in odometry]
ground_truth_x_1 = [-i[0] for i in ground_truth]
ground_truth_y_1 = [-i[1] for i in ground_truth]
plt.figure()
plt.title('Odomotry x Ground Truth for Experiment 1')
plt.scatter(odometry_x_1, odometry_y_1, c='y',label='Odometry')
plt.scatter(ground_truth_x_1, ground_truth_y_1, c='b',label='Ground Truth')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Discussion - Odometry - Experiment 1 The odometry result for the first experiment was very bad, as we can see in the image above. Clearly, after the first turn the odometry got completely lost, even making the robot think it was going in the fully opposite direction. This bad result can possibly be explained by the many turns made along the trajectory and also by the simplicity of the kinematic model used. We can observe that the way the robot turns is anything but stable and delicate, due to the way the 'wall-follower' controller was implemented: in order to turn or even to follow the wall, the robot makes a lot of micro turns and changes its orientation and wheel velocities a lot. This situation, along with a kinematic model that is very sensitive to angle changes, is the main reason for the obtained results.
###Code
final_mapping_x_1 = []
final_mapping_y_1 = []
for array_ in points_kmeans:
for point in array_:
final_mapping_x_1.append(-point[0])
final_mapping_y_1.append(-point[1])
plt.figure()
plt.title('Mapping for Experiment 1')
plt.scatter(final_mapping_x_1, final_mapping_y_1)
plt.show()
###Output
_____no_output_____
###Markdown
Discussion - Mapping - Experiment 1 The result obtained from the mapping algorithm can be classified, in an optimistic view, as OK. As we knew, and had already commented, K-Means is very sensitive to noise, which is easy to see in the above figure. However, as we discussed above, we don't need to make the best map in the world, and the obtained map, even with all the noise, can still be useful for the robot, identifying walls and other objects in the map. Second Experiment In the second experiment the robot was placed as shown in the following image: **The obtained results follow, and the discussions about them are below the graphics.**
###Code
# we take the negative in order to simulate the VREP orientation
odometry_x_2 = [-i[0] for i in odometry]
odometry_y_2 = [-i[1] for i in odometry]
ground_truth_x_2 = [-i[0] for i in ground_truth]
ground_truth_y_2 = [-i[1] for i in ground_truth]
plt.figure()
plt.title('Odomotry x Ground Truth for Experiment 2')
plt.scatter(odometry_x_2, odometry_y_2, c='y',label='Odometry')
plt.scatter(ground_truth_x_2, ground_truth_y_2, c='b',label='Ground Truth')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Discussion - Odometry - Experiment 2 The odometry result for the second experiment was much better than the first one, as we can see in the image above. The only problem we can observe appears after the first turn, as we expected, but this time the robot is able to recover its orientation and resume its movement. The other main problem we can observe is the greater-than-normal displacement perceived by the odometry.
###Code
final_mapping_x_2 = []
final_mapping_y_2 = []
for array_ in points_kmeans:
for point in array_:
final_mapping_x_2.append(-point[0])
final_mapping_y_2.append(-point[1])
plt.figure()
plt.title('Mapping for Experiment 2')
plt.scatter(final_mapping_x_2, final_mapping_y_2)
plt.show()
###Output
_____no_output_____
###Markdown
Discussion - Mapping - Experiment 2 The result obtained from the mapping algorithm is, just like the odometry, better than the first one. Again, there is a lot of noise on the map, a clear point for improvement, but it is fairly easy to identify the objects seen in the scene on this map. So, as we commented for the first experiment, it could be useful for robot localization. Third Experiment In the third experiment the robot was placed as shown in the following image: **The obtained results follow, and the discussions about them are below the graphics.**
###Code
# we take the negative in order to simulate the VREP orientation
odometry_x_3 = [-i[0] for i in odometry]
odometry_y_3 = [-i[1] for i in odometry]
ground_truth_x_3 = [-i[0] for i in ground_truth]
ground_truth_y_3 = [-i[1] for i in ground_truth]
plt.figure()
plt.title('Odomotry x Ground Truth for Experiment 3')
plt.scatter(odometry_x_3, odometry_y_3, c='y')
plt.scatter(ground_truth_x_3, ground_truth_y_3, c='b')
plt.show()
###Output
_____no_output_____
###Markdown
Discussion - Odometry - Experiment 3 The odometry result for the third experiment was very close to the one obtained in the second one, but even better. For the first time, the robot didn't suffer too much after making a turn, keeping a good perception of its own location. The main problem we can observe is the greater-than-normal displacement perceived by the odometry, in both directions.
###Code
final_mapping_x_3 = []
final_mapping_y_3 = []
for array_ in points_kmeans:
for point in array_:
final_mapping_x_3.append(-point[0])
final_mapping_y_3.append(-point[1])
plt.figure()
plt.title('Mapping for Experiment 3')
plt.scatter(final_mapping_x_3, final_mapping_y_3)
plt.show()
###Output
_____no_output_____
###Markdown
Project 1: Quantified Self Due: Dec 11, 5PM (together with Project 2).As we discussed during lecture, this part of the project (Project 1) aims to give you a taste of data collection and self-analysis, inspired by the [Quantified Self community](https://quantifiedself.com/show-and-tell/). Additional instructions are posted on the Project 1 page: https://ucsb-int5.github.io/f19/hwk/project1.All code that you submit as part of this project should use the skills that you learned in this class, i.e., we will not accept projects written using Pandas or advanced Python code.The following article provides some helpful guidelines to consider when assembling your notebook. I recommend reading it before you start on the project: [A Data Science Project Style Guide from Dataquest](https://www.dataquest.io/blog/data-science-project-style-guide).Feel free to modify this notebook and add your context and code below.
###Code
import numpy as np
from datascience import *
# These lines set up graphing capabilities.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
from client.api.notebook import Notebook
ok = Notebook('project1.ok')
_ = ok.auth(inline=True)
###Output
_____no_output_____
###Markdown
[Insert the title of your project] IntroductionYour introduction should explain:* the motivation for the project,* a description of the numerical variables you tracked,* questions that you planned to explore, what relationship do you expect to find and why. Data Collection* What variables did you measure? (Note that you need at least one *pair* of variables, but you can do more if you wish.)* What was the procedure for your data collection: what are you measuring? How? When / how often? (Write it in a way that someone can take your description and reproduce it in their life.)Explain how you collected the data: upload or link to an Excel/Google spreadsheet, or upload a picture of your tracker if you did it on paper or a screenshot if you collected data on some other website/app.Here's how you can include an image in your notebook:* Upload it into the project folder* Link to it by adapting the code below (if you uploaded a picture called `bujo_tracker.png` into the project folder, then the link would be ``): Recorded valuesYou will need to create a table with your recorded values and include it in the provided Jupyter notebook.
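As a hedged illustration only (the column names and values below are hypothetical placeholders, not part of the assignment), a table of recorded values can be built directly from arrays with the `datascience` library:

```python
# Hypothetical sketch: replace the column names and values with your own measurements.
my_data = Table().with_columns(
    'Sleep hours', make_array(7.5, 6.0, 8.0, 7.0, 6.5),
    'Coffee cups', make_array(1, 3, 0, 2, 2)
)
my_data
```

A CSV exported from a spreadsheet could equally be loaded with `Table.read_table('my_data.csv')` (hypothetical filename).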
###Code
# my_data = Table() # upload a CSV or enter your values manually
###Output
_____no_output_____
###Markdown
VisualizationPlot the variables. Make sure to label the axes! AnalysisCompute the correlation and any other informative quantities that will help in your analysis. Write a short summary explaining what insights the visualization is supposed to provide. Does it look like you expected it would? Why or why not? ConclusionSummarize the insights from your visualization and analysis. Did the data answer the questions that you planned to explore? Did you find the relationship you expected to find? Why or why not? What questions are still left unanswered? Submit your workPeriodically save your notebook and submit it to okpy to back up your work. Before you submit your final version, 1. Select `Cell -> All Output -> Clear`, then run `Cell -> Run All` to ensure that you have executed all cells and there are no errors. 2. `Save and Checkpoint` from the `File` menu. **Remember to save before submitting!** 3. Read through the notebook to make sure everything is fine. 4. Submit using the cell below.
###Code
_ = ok.submit()
###Output
_____no_output_____
###Markdown
Project 1: World Progress In this project, you'll explore data from [Gapminder.org](http://gapminder.org), a website dedicated to providing a fact-based view of the world and how it has changed. That site includes several data visualizations and presentations, but also publishes the raw data that we will use in this project to recreate and extend some of their most famous visualizations.The Gapminder website collects data from many sources and compiles them into tables that describe many countries around the world. All of the data they aggregate are published in the [Systema Globalis](https://github.com/open-numbers/ddf--gapminder--systema_globalis/blob/master/README.md). Their goal is "to compile all public statistics; Social, Economic and Environmental; into a comparable total dataset." All data sets in this project are copied directly from the Systema Globalis without any changes.This project is dedicated to [Hans Rosling](https://en.wikipedia.org/wiki/Hans_Rosling) (1948-2017), who championed the use of data to understand and prioritize global development challenges. Logistics**Deadline.** This project is due at 11:59pm on Tuesday 2/18. Projects will be accepted up to 2 days (48 hours) late; a project submitted less than 24 hours after the deadline will receive 2/3 credit, a project submitted between 24 and 48 hours after the deadline will receive 1/3 credit, and a project submitted 48 hours or more after the deadline will receive no credit. It's **much** better to be early than late, so start working now.**Checkpoint.** For full credit, you must also complete the first 8 questions, pass all public autograder tests, and submit to Gradescope by 11:59pm on Monday 2/10. After you've submitted the checkpoint, you may still change your answers before the project deadline - only your final submission will be graded for correctness.**Partners.** You may work with one other partner; Only one of you is required to submit the project. On Gradescope, the person who submits should also designate their partner so that both of you receive credit.**Rules.** Don't share your code with anybody but your partner. You are welcome to discuss questions with other students, but don't share the answers. The experience of solving the problems in this project will prepare you for exams (and life). If someone asks you for the answer, resist! Instead, you can demonstrate how you would solve a similar problem.**Support.** You are not alone! If you need help, come to office hours or email me to make an appointment at another time.**Tests.** The tests that are given are **not comprehensive** and passing the tests for a question **does not** mean that you answered the question correctly. Tests usually only check that your table has the correct column labels. However, more tests will be applied to verify the correctness of your submission in order to assign your final score, so be careful and check your work! You might want to create your own checks along the way to see if your answers make sense. Additionally, before you submit, make sure that none of your cells take a very long time to run (several minutes).**Free Response Questions:** Make sure that you put the answers to the written questions in the indicated cell we provide.**Advice.** Develop your answers incrementally. To perform a complicated table manipulation, break it up into steps, perform each step on a different line, give a new name to each result, and check that each intermediate result is what you expect. 
You can add any additional names or functions you want to the provided cells. Make sure that you are using distinct and meaningful variable names throughout the notebook. Along that line, **DO NOT** reuse the variable names that we use when we grade your answers. For example, in Question 1 of the Global Poverty section, we ask you to assign an answer to `latest`. Do not reassign the variable name `latest` to anything else in your notebook, otherwise there is the chance that our tests grade against what `latest` was reassigned to.You **never** have to use just one line in this project or any others. Use intermediate variables and multiple lines as much as you would like! To get started, load `datascience`, `numpy`, `plots`, and `ok`.
###Code
from datascience import *
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
from client.api.notebook import Notebook
ok = Notebook('project1.ok')
###Output
_____no_output_____
###Markdown
1. Global Population Growth The global population of humans reached 1 billion around 1800, 3 billion around 1960, and 7 billion around 2011. The potential impact of exponential population growth has concerned scientists, economists, and politicians alike.The UN Population Division estimates that the world population will likely continue to grow throughout the 21st century, but at a slower rate, perhaps reaching 11 billion by 2100. However, the UN does not rule out scenarios of more extreme growth. In this section, we will examine some of the factors that influence population growth and how they are changing around the world.The first table we will consider is the total population of each country over time. Run the cell below.
###Code
population = Table.read_table('population.csv')
population.show(3)
###Output
_____no_output_____
###Markdown
**Note:** The population csv file can also be found [here](https://github.com/open-numbers/ddf--gapminder--systema_globalis/raw/master/ddf--datapoints--population_total--by--geo--time.csv). The data for this project was downloaded in February 2017. BangladeshIn the `population` table, the `geo` column contains three-letter codes established by the [International Organization for Standardization](https://en.wikipedia.org/wiki/International_Organization_for_Standardization) (ISO) in the [Alpha-3](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3Current_codes) standard. We will begin by taking a close look at Bangladesh. Inspect the standard to find the 3-letter code for Bangladesh. **Question 1.** Create a table called `b_pop` that has two columns labeled `time` and `population_total`. The first column should contain the years from 1970 through 2015 (including both 1970 and 2015) and the second should contain the population of Bangladesh in each of those years.<!--BEGIN QUESTIONname: q1_1-->
###Code
b_pop = ...
b_pop
ok.grade("q1_1");
###Output
_____no_output_____
###Markdown
Run the following cell to create a table called `b_five` that has the population of Bangladesh every five years. At a glance, it appears that the population of Bangladesh has been growing quickly indeed!
###Code
b_pop.set_format('population_total', NumberFormatter)
fives = np.arange(1970, 2016, 5) # 1970, 1975, 1980, ...
b_five = b_pop.sort('time').where('time', are.contained_in(fives))
b_five
###Output
_____no_output_____
###Markdown
**Question 2.** Assign `initial` to an array that contains the population for every five year interval from 1970 to 2010. Then, assign `changed` to an array that contains the population for every five year interval from 1975 to 2015. You should use the `b_five` table to create both arrays, first filtering the table to only contain the relevant years.We have provided the code below that uses `initial` and `changed` in order to add a column to `b_five` called `annual_growth`. Don't worry about the calculation of the growth rates; run the test below to test your solution.If you are interested in how we came up with the formula for growth rates, consult the [growth rates](https://www.inferentialthinking.com/chapters/03/2/1/growth) section of the textbook.<!--BEGIN QUESTIONname: q1_2-->
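As a quick, worked sanity check of the formula used below (the numbers are hypothetical, not from the dataset): a population growing from 100 to 110 over a five-year interval implies an annual growth rate of (110/100)^(1/5) − 1 ≈ 1.9%, since compounding that rate for five years reproduces the total change.

```python
# Hypothetical numbers, only to illustrate the annualized growth formula.
initial_pop = 100
changed_pop = 110
annual_growth = (changed_pop / initial_pop) ** (1 / 5) - 1
annual_growth  # ≈ 0.0192, i.e. about 1.9% per year
```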
###Code
initial = ...
changed = ...
b_1970_through_2010 = b_five.where('time', are.below_or_equal_to(2010))
b_five_growth = b_1970_through_2010.with_column('annual_growth', (changed/initial)**0.2-1)
b_five_growth.set_format('annual_growth', PercentFormatter)
ok.grade("q1_2");
###Output
_____no_output_____
###Markdown
While the population has grown every five years since 1970, the annual growth rate decreased dramatically from 1985 to 2005. Let's look at some other information in order to develop a possible explanation. Run the next cell to load three additional tables of measurements about countries over time.
###Code
life_expectancy = Table.read_table('life_expectancy.csv')
child_mortality = Table.read_table('child_mortality.csv').relabel(2, 'child_mortality_under_5_per_1000_born')
fertility = Table.read_table('fertility.csv')
###Output
_____no_output_____
###Markdown
The `life_expectancy` table contains a statistic that is often used to measure how long people live, called *life expectancy at birth*. This number, for a country in a given year, [does not measure how long babies born in that year are expected to live](http://blogs.worldbank.org/opendata/what-does-life-expectancy-birth-really-mean). Instead, it measures how long someone would live, on average, if the *mortality conditions* in that year persisted throughout their lifetime. These "mortality conditions" describe what fraction of people at each age survived the year. So, it is a way of measuring the proportion of people that are staying alive, aggregated over different age groups in the population. Run the following cells below to see `life_expectancy`, `child_mortality`, and `fertility`. Refer back to these tables as they will be helpful for answering further questions!
###Code
life_expectancy
child_mortality
fertility
###Output
_____no_output_____
###Markdown
**Question 3.** Perhaps population is growing more slowly because people aren't living as long. Use the `life_expectancy` table to draw a line graph with the years 1970 and later on the horizontal axis that shows how the *life expectancy at birth* has changed in Bangladesh.<!--BEGIN QUESTIONname: q1_3manual: true-->
###Code
#Fill in code here
...
###Output
_____no_output_____
###Markdown
**Question 4.** Assuming everything else stays the same, do the trends in life expectancy in the graph above directly explain why the population growth rate decreased from 1985 to 2010 in Bangladesh? Why or why not? Hint: What happened in Bangladesh in 1991, and does that event explain the overall change in population growth rate?<!--BEGIN QUESTIONname: q1_4manual: true--> *Write your answer here, replacing this text.* The `fertility` table contains a statistic that is often used to measure how many babies are being born, the *total fertility rate*. This number describes the [number of children a woman would have in her lifetime](https://www.measureevaluation.org/prh/rh_indicators/specific/fertility/total-fertility-rate), on average, if the current rates of birth by age of the mother persisted throughout her child bearing years, assuming she survived through age 49. **Question 5.** Write a function `fertility_over_time` that takes the Alpha-3 code of a `country` and a `start` year. It returns a two-column table with labels `Year` and `Children per woman` that can be used to generate a line chart of the country's fertility rate each year, starting at the `start` year. The plot should include the `start` year and all later years that appear in the `fertility` table. Then, in the next cell, call your `fertility_over_time` function on the Alpha-3 code for Bangladesh and the year 1970 in order to plot how Bangladesh's fertility rate has changed since 1970. Note that the function `fertility_over_time` should not return the plot itself. **The expression that draws the line plot is provided for you; please don't change it.**<!--BEGIN QUESTIONname: q1_5-->
###Code
def fertility_over_time(country, start):
"""Create a two-column table that describes a country's total fertility rate each year."""
country_fertility = ...
country_fertility_after_start = ...
cleaned_table = ...
...
bangladesh_code = ...
fertility_over_time(bangladesh_code, 1970).plot(0, 1) # You should *not* change this line.
ok.grade("q1_5");
###Output
_____no_output_____
###Markdown
**Question 6.** Assuming everything else is constant, do the trends in fertility in the graph above help directly explain why the population growth rate decreased from 1985 to 2010 in Bangladesh? Why or why not?<!--BEGIN QUESTIONname: q1_6manual: true--> *Write your answer here, replacing this text.* It has been observed that lower fertility rates are often associated with lower child mortality rates. The link has been attributed to family planning: if parents can expect that their children will all survive into adulthood, then they will choose to have fewer children. We can see if this association is evident in Bangladesh by plotting the relationship between total fertility rate and [child mortality rate per 1000 children](https://en.wikipedia.org/wiki/Child_mortality). **Question 7.** Using both the `fertility` and `child_mortality` tables, draw a scatter diagram that has Bangladesh's total fertility on the horizontal axis and its child mortality on the vertical axis with one point for each year, starting with 1970.**The expression that draws the scatter diagram is provided for you; please don't change it.** Instead, create a table called `post_1969_fertility_and_child_mortality` with the appropriate column labels and data in order to generate the chart correctly. Use the label `Children per woman` to describe total fertility and the label `Child deaths per 1000 born` to describe child mortality.<!--BEGIN QUESTIONname: q1_7manual: false-->
###Code
bgd_fertility = ...
bgd_child_mortality = ...
fertility_and_child_mortality = ...
post_1969_fertility_and_child_mortality = ...
post_1969_fertility_and_child_mortality.scatter('Children per woman', 'Child deaths per 1000 born') # You should *not* change this line.
ok.grade("q1_7");
###Output
_____no_output_____
###Markdown
**Question 8.** In one or two sentences, describe the association (if any) that is illustrated by this scatter diagram. Does the diagram show that reduced child mortality causes parents to choose to have fewer children?<!--BEGIN QUESTIONname: q1_8manual: true--> *Write your answer here, replacing this text.* Checkpoint (due Monday 2/10) Congratulations, you have reached the checkpoint! Save your work and submit to Gradescope. The WorldThe change observed in Bangladesh since 1970 can also be observed in many other developing countries: health services improve, life expectancy increases, and child mortality decreases. At the same time, the fertility rate often plummets, and so the population growth rate decreases despite increasing longevity. Run the cell below to generate two overlaid histograms, one for 1960 and one for 2010, that show the distributions of total fertility rates for these two years among all 201 countries in the `fertility` table.
###Code
Table().with_columns(
'1960', fertility.where('time', 1960).column(2),
'2010', fertility.where('time', 2010).column(2)
).hist(bins=np.arange(0, 10, 0.5), unit='child per woman')
_ = plots.xlabel('Children per woman')
_ = plots.ylabel('Percent per children per woman')
_ = plots.xticks(np.arange(10))
###Output
_____no_output_____
###Markdown
**Question 9.** Assign `fertility_statements` to an array of the numbers of each statement below that can be correctly inferred from these histograms.1. About the same number of countries had a fertility rate between 3.5 and 4.5 in both 1960 and 2010.1. In 2010, about 40% of countries had a fertility rate between 1.5 and 2.1. In 1960, less than 20% of countries had a fertility rate below 3.1. More countries had a fertility rate above 3 in 1960 than in 2010.1. At least half of countries had a fertility rate between 5 and 8 in 1960.1. At least half of countries had a fertility rate below 3 in 2010.<!--BEGIN QUESTIONname: q1_9-->
###Code
fertility_statements = ...
ok.grade("q1_9");
###Output
_____no_output_____
###Markdown
**Question 10.** Draw a line plot of the world population from 1800 through 2005. The world population is the sum of all the country's populations. <!--BEGIN QUESTIONname: q1_10manual: true-->
###Code
#Fill in code here
...
###Output
_____no_output_____
###Markdown
**Question 11.** Create a function `stats_for_year` that takes a `year` and returns a table of statistics. The table it returns should have four columns: `geo`, `population_total`, `children_per_woman_total_fertility`, and `child_mortality_under_5_per_1000_born`. Each row should contain one Alpha-3 country code and three statistics: population, fertility rate, and child mortality for that `year` from the `population`, `fertility` and `child_mortality` tables. Only include rows for which all three statistics are available for the country and year.In addition, restrict the result to country codes that appears in `big_50`, an array of the 50 most populous countries in 2010. This restriction will speed up computations later in the project.After you write `stats_for_year`, try calling `stats_for_year` on any year between 1960 and 2010. Try to understand the output of stats_for_year.*Hint*: The tests for this question are quite comprehensive, so if you pass the tests, your function is probably correct. However, without calling your function yourself and looking at the output, it will be very difficult to understand any problems you have, so try your best to write the function correctly and check that it works before you rely on the `ok` tests to confirm your work.<!--BEGIN QUESTIONname: q1_11manual: false-->
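As a hedged, generic illustration of the `join` behavior this question relies on (the toy tables below are made up and unrelated to the project data): joining on a shared column keeps only the keys present in both tables, which is how countries missing any one statistic drop out.

```python
# Hypothetical toy tables, only to illustrate Table.join.
left = Table().with_columns('geo', make_array('aaa', 'bbb'), 'pop', make_array(10, 20))
right = Table().with_columns('geo', make_array('bbb', 'ccc'), 'rate', make_array(1.5, 2.5))
left.join('geo', right)  # only 'bbb' survives, carrying both 'pop' and 'rate'
```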
###Code
# We first create a population table that only includes the
# 50 countries with the largest 2010 populations. We focus on
# these 50 countries only so that plotting later will run faster.
big_50 = population.where('time', are.equal_to(2010)).sort("population_total", descending=True).take(np.arange(50)).column('geo')
population_of_big_50 = population.where('time', are.above(1959)).where('geo', are.contained_in(big_50))
def stats_for_year(year):
"""Return a table of the stats for each country that year."""
p = population_of_big_50.where('time', are.equal_to(year)).drop('time')
f = fertility.where('time', are.equal_to(year)).drop('time')
c = child_mortality.where('time', are.equal_to(year)).drop('time')
...
...
ok.grade("q1_11");
###Output
_____no_output_____
###Markdown
**Question 12.** Create a table called `pop_by_decade` with two columns called `decade` and `population`. It has a row for each `year` since 1960 that starts a decade. The `population` column contains the total population of all countries included in the result of `stats_for_year(year)` for the first `year` of the decade. For example, 1960 is the first year of the 1960's decade. You should see that these countries contain most of the world's population.*Hint:* One approach is to define a function `pop_for_year` that computes this total population, then `apply` it to the `decade` column. The `stats_for_year` function from the previous question may be useful here.This first test is just a sanity check for your helper function if you choose to use it. You will not lose points for not implementing the function `pop_for_year`.**Note:** The cell where you will generate the `pop_by_decade` table is below the cell where you can choose to define the helper function `pop_for_year`. You should define your `pop_by_decade` table in the cell that starts with the table `decades` being defined. <!--BEGIN QUESTIONname: q1_12_0manual: falsepoints: 0-->
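A minimal sketch of the `apply` pattern suggested in the hint (the function and values here are hypothetical):

```python
# Hypothetical example: Table.apply calls the function once per value in the column.
toy = Table().with_column('decade', make_array(1960, 1970, 1980))
toy.apply(lambda year: year + 5, 'decade')  # array([1965, 1975, 1985])
```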
###Code
def pop_for_year(year):
...
ok.grade("q1_12_0");
###Output
_____no_output_____
###Markdown
Now that you've defined your helper function (if you've chosen to do so), define the `pop_by_decade` table.<!--BEGIN QUESTIONname: q1_12manual: false-->
###Code
decades = Table().with_column('decade', np.arange(1960, 2011, 10))
pop_by_decade = ...
pop_by_decade.set_format(1, NumberFormatter)
ok.grade("q1_12");
###Output
_____no_output_____
###Markdown
The `countries` table describes various characteristics of countries. The `country` column contains the same codes as the `geo` column in each of the other data tables (`population`, `fertility`, and `child_mortality`). The `world_6region` column classifies each country into a region of the world. Run the cell below to inspect the data.
###Code
countries = Table.read_table('countries.csv').where('country', are.contained_in(population.group('geo').column('geo')))
countries.select('country', 'name', 'world_6region')
###Output
_____no_output_____
###Markdown
**Question 13.** Create a table called `region_counts` that has two columns, `region` and `count`. It should contain two columns: a region column and a count column that contains the number of countries in each region that appear in the result of `stats_for_year(1960)`. For example, one row would have `south_asia` as its `world_6region` value and an integer as its `count` value: the number of large South Asian countries for which we have population, fertility, and child mortality numbers from 1960.<!--BEGIN QUESTIONname: q1_13-->
###Code
region_counts = ...
region_counts
ok.grade("q1_13");
###Output
_____no_output_____
###Markdown
The following scatter diagram compares total fertility rate and child mortality rate for each country in 1960. The area of each dot represents the population of the country, and the color represents its region of the world. Run the cell. Do you think you can identify any of the dots?
###Code
from functools import lru_cache as cache
# This cache annotation makes sure that if the same year
# is passed as an argument twice, the work of computing
# the result is only carried out once.
@cache(None)
def stats_relabeled(year):
"""Relabeled and cached version of stats_for_year."""
return stats_for_year(year).relabel(2, 'Children per woman').relabel(3, 'Child deaths per 1000 born')
def fertility_vs_child_mortality(year):
"""Draw a color scatter diagram comparing child mortality and fertility."""
with_region = stats_relabeled(year).join('geo', countries.select('country', 'world_6region'), 'country')
with_region.scatter(2, 3, sizes=1, group=4, s=500)
plots.xlim(0,10)
plots.ylim(-50, 500)
plots.title(year)
fertility_vs_child_mortality(1960)
###Output
_____no_output_____
###Markdown
**Question 14.** Assign `scatter_statements` to an array of the numbers of each statement below that can be inferred from this scatter diagram for 1960. 1. As a whole, the `europe_central_asia` region had the lowest child mortality rate.1. The lowest child mortality rate of any country was from an `east_asia_pacific` country.1. Most countries had a fertility rate above 5.1. There was an association between child mortality and fertility.1. The two largest countries by population also had the two highest child mortality rate.<!--BEGIN QUESTIONname: q1_14-->
###Code
scatter_statements = ...
ok.grade("q1_14");
###Output
_____no_output_____
###Markdown
The result of the cell below is interactive. Drag the slider to the right to see how countries have changed over time. You'll find that the great divide between so-called "Western" and "developing" countries that existed in the 1960's has nearly disappeared. This shift in fertility rates is the reason that the global population is expected to grow more slowly in the 21st century than it did in the 19th and 20th centuries.**Note:** Don't worry if a red warning pops up when running the cell below. You'll still be able to run the cell!
###Code
import ipywidgets as widgets
# This part takes a few minutes to run because it
# computes 55 tables in advance: one for each year.
Table().with_column('Year', np.arange(1960, 2016)).apply(stats_relabeled, 'Year')
_ = widgets.interact(fertility_vs_child_mortality,
year=widgets.IntSlider(min=1960, max=2015, value=1960))
###Output
_____no_output_____
###Markdown
Now is a great time to take a break and watch the same data presented by [Hans Rosling in a 2010 TEDx talk](https://www.gapminder.org/videos/reducing-child-mortality-a-moral-and-environmental-imperative) with smoother animation and witty commentary. 2. Global Poverty In 1800, 85% of the world's 1 billion people lived in *extreme poverty*, defined by the United Nations as "a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information." A common measure of extreme poverty is a person living on less than \$1.25 per day.In 2018, the proportion of people living in extreme poverty was estimated to be 8%. Although the world rate of extreme poverty has declined consistently for hundreds of years, the number of people living in extreme poverty is still over 600 million. The United Nations recently adopted an [ambitious goal](http://www.un.org/sustainabledevelopment/poverty/): "By 2030, eradicate extreme poverty for all people everywhere."In this section, we will examine extreme poverty trends around the world. First, load the population and poverty rate by country and year and the country descriptions. While the `population` table has values for every recent year for many countries, the `poverty` table only includes certain years for each country in which a measurement of the rate of extreme poverty was available.
###Code
population = Table.read_table('population.csv')
countries = Table.read_table('countries.csv').where('country', are.contained_in(population.group('geo').column('geo')))
poverty = Table.read_table('poverty.csv')
poverty.show(3)
###Output
_____no_output_____
###Markdown
**Question 1.** Assign `latest_poverty` to a three-column table with one row for each country that appears in the `poverty` table. The first column should contain the 3-letter code for the country. The second column should contain the most recent year for which an extreme poverty rate is available for the country. The third column should contain the poverty rate in that year. **Do not change the last line, so that the labels of your table are set correctly.***Hint*: think about how ```group``` works: it does a sequential search of the table (from top to bottom) and collects values in the array in the order in which they appear, and then applies a function to that array. The `first` function may be helpful, but you are not required to use it.<!--BEGIN QUESTIONname: q2_1-->
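A minimal, hypothetical illustration of how `group` collects the values of each group (in top-to-bottom row order) and then applies the given function to that array:

```python
# Hypothetical toy table, only to illustrate group(column, collect_function).
def first_value(values):
    return values.item(0)

toy = Table().with_columns('geo', make_array('aaa', 'aaa', 'bbb'),
                           'time', make_array(2010, 2005, 2012))
toy.group('geo', first_value)  # 'aaa' keeps 2010 (its first row), 'bbb' keeps 2012
```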
###Code
def first(values):
return values.item(0)
latest_poverty = ...
latest_poverty = latest_poverty.relabeled(0, 'geo').relabeled(1, 'time').relabeled(2, 'poverty_percent') # You should *not* change this line.
latest_poverty
ok.grade("q2_1");
###Output
_____no_output_____
###Markdown
**Question 2.** Using both `latest_poverty` and `population`, create a four-column table called `recent_poverty_total` with one row for each country in `latest_poverty`. The four columns should have the following labels and contents:1. `geo` contains the 3-letter country code,1. `poverty_percent` contains the most recent poverty percent,1. `population_total` contains the population of the country in 2010,1. `poverty_total` contains the number of people in poverty **rounded to the nearest integer**, based on the 2010 population and most recent poverty rate.<!--BEGIN QUESTIONname: q2_2-->
###Code
poverty_and_pop = ...
recent_poverty_total = ...
recent_poverty_total
ok.grade("q2_2");
###Output
_____no_output_____
###Markdown
**Question 3.** Assign the name `poverty_percent` to the known percentage of the world’s 2010 population that was living in extreme poverty. Assume that the `poverty_total` numbers in the `recent_poverty_total` table describe **all** people in 2010 living in extreme poverty. You should find a number that is above the 2018 global estimate of 8%, since many country-specific poverty rates are older than 2018.*Hint*: The sum of the `population_total` column in the `recent_poverty_total` table is not the world population, because only a subset of the world's countries are included in the `recent_poverty_total` table (only some countries have known poverty rates). Use the `population` table to compute the world's 2010 total population.<!--BEGIN QUESTIONname: q2_3-->
###Code
poverty_percent = ...
poverty_percent
ok.grade("q2_3");
###Output
_____no_output_____
###Markdown
The `countries` table includes not only the name and region of countries, but also their positions on the globe.
###Code
countries.select('country', 'name', 'world_4region', 'latitude', 'longitude')
###Output
_____no_output_____
###Markdown
**Question 4.** Using both `countries` and `recent_poverty_total`, create a five-column table called `poverty_map` with one row for every country in `recent_poverty_total`. The five columns should have the following labels and contents:1. `latitude` contains the country's latitude,1. `longitude` contains the country's longitude,1. `name` contains the country's name,1. `region` contains the country's region from the `world_4region` column of `countries`,1. `poverty_total` contains the country's poverty total.<!--BEGIN QUESTIONname: q2_4-->
###Code
poverty_map = ...
poverty_map
ok.grade("q2_4");
###Output
_____no_output_____
###Markdown
Run the cell below to draw a map of the world in which the areas of circles represent the number of people living in extreme poverty. Double-click on the map to zoom in.
###Code
# It may take a few seconds to generate this map.
colors = {'africa': 'blue', 'europe': 'black', 'asia': 'red', 'americas': 'green'}
scaled = poverty_map.with_columns(
'poverty_total', 1e-4 * poverty_map.column('poverty_total'),
'region', poverty_map.apply(colors.get, 'region')
)
Circle.map_table(scaled)
###Output
_____no_output_____
###Markdown
Although people live in extreme poverty throughout the world (with more than 5 million in the United States), the largest numbers are in Asia and Africa. **Question 5.** Assign `largest` to a two-column table with the `name` (not the 3-letter code) and `poverty_total` of the 10 countries with the largest number of people living in extreme poverty.<!--BEGIN QUESTIONname: q2_5-->
###Code
largest = ...
largest.set_format('poverty_total', NumberFormatter)
ok.grade("q2_5");
###Output
_____no_output_____
###Markdown
**Question 6.** Write a function called `poverty_timeline` that takes **the name of a country** as its argument. It should draw a line plot of the number of people living in poverty in that country with time on the horizontal axis. The line plot should have a point for each row in the `poverty` table for that country. To compute the population living in poverty from a poverty percentage, multiply by the population of the country **in that year**.*Hint:* This question is long. Feel free to create cells and experiment.
###Code
def poverty_timeline(country):
'''Draw a timeline of people living in extreme poverty in a country.'''
geo = ...
# This solution will take multiple lines of code. Use as many as you need
...
###Output
_____no_output_____
###Markdown
Finally, draw the timelines below to see how the world is changing. You can check your work by comparing your graphs to the ones on [gapminder.org](https://www.gapminder.org/tools/$state$entities$show$country$/$in@=ind;;;;&marker$axis_y$which=number_of_people_in_poverty&scaleType=linear&spaceRef:null;;;&chart-type=linechart).<!--BEGIN QUESTIONname: q2_6manual: true-->
###Code
poverty_timeline('India')
poverty_timeline('Nigeria')
poverty_timeline('China')
poverty_timeline('United States')
###Output
_____no_output_____
###Markdown
Although the number of people living in extreme poverty has been increasing in Nigeria and the United States, the massive decreases in China and India have shaped the overall trend that extreme poverty is decreasing worldwide, both in percentage and in absolute number. To learn more, watch [Hans Rosling in a 2015 film](https://www.gapminder.org/videos/dont-panic-end-poverty/) about the UN goal of eradicating extreme poverty from the world. Below, we've also added an interactive dropdown menu for you to visualize `poverty_timeline` graphs for other countries. Note that each dropdown menu selection may take a few seconds to run.
###Code
# Just run this cell
all_countries = poverty_map.column('name')
_ = widgets.interact(poverty_timeline, country=list(all_countries))
###Output
_____no_output_____
###Markdown
**You're finished!** Congratulations on mastering data visualization and table manipulation. Time to submit. 3. Submission Once you're finished, select "Save and Checkpoint" in the File menu, download your .ipynb file, and upload it to Gradescope.
###Code
# For your convenience, you can run this cell to run all the tests at once!
import os
print("Running all tests...")
_ = [ok.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q') and len(q) <= 10]
print("Finished running all tests.")
###Output
_____no_output_____
###Markdown
Source of data : https://www.kaggle.com/karangadiya/fifa19
###Code
# Libraries we need
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, median_absolute_error
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
###Output
_____no_output_____
###Markdown
CRISP-DM Methodology STAGE ONE : Business Understanding I love football, it is my favorite hobby, so I chose to work with FIFA data. I am interested in answering the questions below:- 1- What is the distribution of players' ages?- 2- What is the distribution of players' heights?- 3- What is the distribution of players' weights?- 4- Which clubs have the highest average player wage?- 5- What is the distribution of players' overall ratings?- 6- Which player has the highest overall rating?
###Code
# Read dataset
fifa19_player_df = pd.read_csv('fifa_data.csv.zip')
fifa19_player_df.head()
###Output
_____no_output_____
###Markdown
STAGE TWO : Data Understanding
###Code
# number of rows
fifa19_player_df.shape[0]
# number of columns
fifa19_player_df.shape[1]
# check numeric columns
fifa19_player_df.describe()
# check datatype
fifa19_player_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18207 entries, 0 to 18206
Data columns (total 89 columns):
Unnamed: 0 18207 non-null int64
ID 18207 non-null int64
Name 18207 non-null object
Age 18207 non-null int64
Photo 18207 non-null object
Nationality 18207 non-null object
Flag 18207 non-null object
Overall 18207 non-null int64
Potential 18207 non-null int64
Club 17966 non-null object
Club Logo 18207 non-null object
Value 18207 non-null object
Wage 18207 non-null object
Special 18207 non-null int64
Preferred Foot 18159 non-null object
International Reputation 18159 non-null float64
Weak Foot 18159 non-null float64
Skill Moves 18159 non-null float64
Work Rate 18159 non-null object
Body Type 18159 non-null object
Real Face 18159 non-null object
Position 18147 non-null object
Jersey Number 18147 non-null float64
Joined 16654 non-null object
Loaned From 1264 non-null object
Contract Valid Until 17918 non-null object
Height 18159 non-null object
Weight 18159 non-null object
LS 16122 non-null object
ST 16122 non-null object
RS 16122 non-null object
LW 16122 non-null object
LF 16122 non-null object
CF 16122 non-null object
RF 16122 non-null object
RW 16122 non-null object
LAM 16122 non-null object
CAM 16122 non-null object
RAM 16122 non-null object
LM 16122 non-null object
LCM 16122 non-null object
CM 16122 non-null object
RCM 16122 non-null object
RM 16122 non-null object
LWB 16122 non-null object
LDM 16122 non-null object
CDM 16122 non-null object
RDM 16122 non-null object
RWB 16122 non-null object
LB 16122 non-null object
LCB 16122 non-null object
CB 16122 non-null object
RCB 16122 non-null object
RB 16122 non-null object
Crossing 18159 non-null float64
Finishing 18159 non-null float64
HeadingAccuracy 18159 non-null float64
ShortPassing 18159 non-null float64
Volleys 18159 non-null float64
Dribbling 18159 non-null float64
Curve 18159 non-null float64
FKAccuracy 18159 non-null float64
LongPassing 18159 non-null float64
BallControl 18159 non-null float64
Acceleration 18159 non-null float64
SprintSpeed 18159 non-null float64
Agility 18159 non-null float64
Reactions 18159 non-null float64
Balance 18159 non-null float64
ShotPower 18159 non-null float64
Jumping 18159 non-null float64
Stamina 18159 non-null float64
Strength 18159 non-null float64
LongShots 18159 non-null float64
Aggression 18159 non-null float64
Interceptions 18159 non-null float64
Positioning 18159 non-null float64
Vision 18159 non-null float64
Penalties 18159 non-null float64
Composure 18159 non-null float64
Marking 18159 non-null float64
StandingTackle 18159 non-null float64
SlidingTackle 18159 non-null float64
GKDiving 18159 non-null float64
GKHandling 18159 non-null float64
GKKicking 18159 non-null float64
GKPositioning 18159 non-null float64
GKReflexes 18159 non-null float64
Release Clause 16643 non-null object
dtypes: float64(38), int64(6), object(45)
memory usage: 12.4+ MB
###Markdown
STAGE THREE : Prepare Data Since we are using a single dataset, we don't need to gather any additional data, so we will start the third stage by assessing the data. Assessing Data Let's check for duplicate values in the data.
###Code
# there are 1013 dupl. values
fifa19_player_df.Name.duplicated().sum()
# check unique values for each column
fifa19_player_df.nunique()
###Output
_____no_output_____
###Markdown
There are 1013 duplicated values in Name, so we need to find out who they are.
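One way to list the duplicated names themselves (a sketch using pandas' `duplicated` with `keep=False` so that every occurrence is shown):

```python
# Show every row whose Name appears more than once in the dataset.
dup_names = fifa19_player_df[fifa19_player_df['Name'].duplicated(keep=False)]
dup_names.sort_values('Name')[['Name', 'Age', 'Club']].head(10)
```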
###Code
fifa19_player_df.Name.value_counts()
fifa19_player_df.Club.value_counts()
###Output
_____no_output_____
###Markdown
OK, let's ask ourselves: are there any fully duplicated rows in the data?
###Code
# there are no fully duplicated rows in the data
fifa19_player_df[fifa19_player_df.duplicated()]
# Assign an age range to each player (boundary ages 20, 25 and 30 are included in a range)
range_of_age = []
for age in fifa19_player_df['Age']:
    if age < 20: range_of_age.append('Less than 20')
    elif 20 <= age < 25: range_of_age.append('20-25')
    elif 25 <= age < 30: range_of_age.append('25-30')
    elif age >= 30: range_of_age.append('More than 30')
    else: range_of_age.append('Not defined')
# Create a column from the list
fifa19_player_df['range_of_age'] = range_of_age
# View the new dataframe
print(fifa19_player_df)
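# Note (hedged alternative, not part of the original analysis): the same banding
# could be done in a single call with pd.cut, which also makes the treatment of
# the boundary ages explicit:
# fifa19_player_df['range_of_age'] = pd.cut(fifa19_player_df['Age'],
#                                           bins=[0, 20, 25, 30, np.inf], right=False,
#                                           labels=['Less than 20', '20-25', '25-30', 'More than 30'])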
###Output
Unnamed: 0 ID Name Age \
0 0 158023 L. Messi 31
1 1 20801 Cristiano Ronaldo 33
2 2 190871 Neymar Jr 26
3 3 193080 De Gea 27
4 4 192985 K. De Bruyne 27
5 5 183277 E. Hazard 27
6 6 177003 L. Modrić 32
7 7 176580 L. Suárez 31
8 8 155862 Sergio Ramos 32
9 9 200389 J. Oblak 25
10 10 188545 R. Lewandowski 29
11 11 182521 T. Kroos 28
12 12 182493 D. Godín 32
13 13 168542 David Silva 32
14 14 215914 N. Kanté 27
15 15 211110 P. Dybala 24
16 16 202126 H. Kane 24
17 17 194765 A. Griezmann 27
18 18 192448 M. ter Stegen 26
19 19 192119 T. Courtois 26
20 20 189511 Sergio Busquets 29
21 21 179813 E. Cavani 31
22 22 167495 M. Neuer 32
23 23 153079 S. Agüero 30
24 24 138956 G. Chiellini 33
25 25 231747 K. Mbappé 19
26 26 209331 M. Salah 26
27 27 200145 Casemiro 26
28 28 198710 J. Rodríguez 26
29 29 198219 L. Insigne 27
... ... ... ... ...
18177 18177 238550 R. Roache 18
18178 18178 243158 L. Wahlstedt 18
18179 18179 246243 J. Williams 17
18180 18180 221669 M. Hurst 22
18181 18181 245734 C. Maher 17
18182 18182 246001 Y. Góez 18
18183 18183 53748 K. Pilkington 44
18184 18184 241657 D. Horton 18
18185 18185 243961 E. Tweed 19
18186 18186 240917 Zhang Yufeng 20
18187 18187 240158 C. Ehlich 19
18188 18188 240927 L. Collins 17
18189 18189 240160 A. Kaltner 18
18190 18190 245569 L. Watkins 18
18191 18191 245570 J. Norville-Williams 18
18192 18192 245571 S. Squire 18
18193 18193 244823 N. Fuentes 18
18194 18194 245862 J. Milli 18
18195 18195 243582 S. Griffin 18
18196 18196 238477 K. Fujikawa 19
18197 18197 246167 D. Holland 18
18198 18198 242844 J. Livesey 18
18199 18199 244677 M. Baldisimo 18
18200 18200 231381 J. Young 18
18201 18201 243413 D. Walsh 18
18202 18202 238813 J. Lundstram 19
18203 18203 243165 N. Christoffersson 19
18204 18204 241638 B. Worman 16
18205 18205 246268 D. Walker-Rice 17
18206 18206 246269 G. Nugent 16
Photo Nationality \
0 https://cdn.sofifa.org/players/4/19/158023.png Argentina
1 https://cdn.sofifa.org/players/4/19/20801.png Portugal
2 https://cdn.sofifa.org/players/4/19/190871.png Brazil
3 https://cdn.sofifa.org/players/4/19/193080.png Spain
4 https://cdn.sofifa.org/players/4/19/192985.png Belgium
5 https://cdn.sofifa.org/players/4/19/183277.png Belgium
6 https://cdn.sofifa.org/players/4/19/177003.png Croatia
7 https://cdn.sofifa.org/players/4/19/176580.png Uruguay
8 https://cdn.sofifa.org/players/4/19/155862.png Spain
9 https://cdn.sofifa.org/players/4/19/200389.png Slovenia
10 https://cdn.sofifa.org/players/4/19/188545.png Poland
11 https://cdn.sofifa.org/players/4/19/182521.png Germany
12 https://cdn.sofifa.org/players/4/19/182493.png Uruguay
13 https://cdn.sofifa.org/players/4/19/168542.png Spain
14 https://cdn.sofifa.org/players/4/19/215914.png France
15 https://cdn.sofifa.org/players/4/19/211110.png Argentina
16 https://cdn.sofifa.org/players/4/19/202126.png England
17 https://cdn.sofifa.org/players/4/19/194765.png France
18 https://cdn.sofifa.org/players/4/19/192448.png Germany
19 https://cdn.sofifa.org/players/4/19/192119.png Belgium
20 https://cdn.sofifa.org/players/4/19/189511.png Spain
21 https://cdn.sofifa.org/players/4/19/179813.png Uruguay
22 https://cdn.sofifa.org/players/4/19/167495.png Germany
23 https://cdn.sofifa.org/players/4/19/153079.png Argentina
24 https://cdn.sofifa.org/players/4/19/138956.png Italy
25 https://cdn.sofifa.org/players/4/19/231747.png France
26 https://cdn.sofifa.org/players/4/19/209331.png Egypt
27 https://cdn.sofifa.org/players/4/19/200145.png Brazil
28 https://cdn.sofifa.org/players/4/19/198710.png Colombia
29 https://cdn.sofifa.org/players/4/19/198219.png Italy
... ... ...
18177 https://cdn.sofifa.org/players/4/19/238550.png Republic of Ireland
18178 https://cdn.sofifa.org/players/4/19/243158.png Sweden
18179 https://cdn.sofifa.org/players/4/19/246243.png England
18180 https://cdn.sofifa.org/players/4/19/221669.png Scotland
18181 https://cdn.sofifa.org/players/4/19/245734.png Republic of Ireland
18182 https://cdn.sofifa.org/players/4/19/246001.png Colombia
18183 https://cdn.sofifa.org/players/4/19/53748.png England
18184 https://cdn.sofifa.org/players/4/19/241657.png England
18185 https://cdn.sofifa.org/players/4/19/243961.png Republic of Ireland
18186 https://cdn.sofifa.org/players/4/19/240917.png China PR
18187 https://cdn.sofifa.org/players/4/19/240158.png Germany
18188 https://cdn.sofifa.org/players/4/19/240927.png Wales
18189 https://cdn.sofifa.org/players/4/19/240160.png Germany
18190 https://cdn.sofifa.org/players/4/19/245569.png England
18191 https://cdn.sofifa.org/players/4/19/245570.png England
18192 https://cdn.sofifa.org/players/4/19/245571.png England
18193 https://cdn.sofifa.org/players/4/19/244823.png Chile
18194 https://cdn.sofifa.org/players/4/19/245862.png Italy
18195 https://cdn.sofifa.org/players/4/19/243582.png Republic of Ireland
18196 https://cdn.sofifa.org/players/4/19/238477.png Japan
18197 https://cdn.sofifa.org/players/4/19/246167.png Republic of Ireland
18198 https://cdn.sofifa.org/players/4/19/242844.png England
18199 https://cdn.sofifa.org/players/4/19/244677.png Canada
18200 https://cdn.sofifa.org/players/4/19/231381.png Scotland
18201 https://cdn.sofifa.org/players/4/19/243413.png Republic of Ireland
18202 https://cdn.sofifa.org/players/4/19/238813.png England
18203 https://cdn.sofifa.org/players/4/19/243165.png Sweden
18204 https://cdn.sofifa.org/players/4/19/241638.png England
18205 https://cdn.sofifa.org/players/4/19/246268.png England
18206 https://cdn.sofifa.org/players/4/19/246269.png England
Flag Overall Potential \
0 https://cdn.sofifa.org/flags/52.png 94 94
1 https://cdn.sofifa.org/flags/38.png 94 94
2 https://cdn.sofifa.org/flags/54.png 92 93
3 https://cdn.sofifa.org/flags/45.png 91 93
4 https://cdn.sofifa.org/flags/7.png 91 92
5 https://cdn.sofifa.org/flags/7.png 91 91
6 https://cdn.sofifa.org/flags/10.png 91 91
7 https://cdn.sofifa.org/flags/60.png 91 91
8 https://cdn.sofifa.org/flags/45.png 91 91
9 https://cdn.sofifa.org/flags/44.png 90 93
10 https://cdn.sofifa.org/flags/37.png 90 90
11 https://cdn.sofifa.org/flags/21.png 90 90
12 https://cdn.sofifa.org/flags/60.png 90 90
13 https://cdn.sofifa.org/flags/45.png 90 90
14 https://cdn.sofifa.org/flags/18.png 89 90
15 https://cdn.sofifa.org/flags/52.png 89 94
16 https://cdn.sofifa.org/flags/14.png 89 91
17 https://cdn.sofifa.org/flags/18.png 89 90
18 https://cdn.sofifa.org/flags/21.png 89 92
19 https://cdn.sofifa.org/flags/7.png 89 90
20 https://cdn.sofifa.org/flags/45.png 89 89
21 https://cdn.sofifa.org/flags/60.png 89 89
22 https://cdn.sofifa.org/flags/21.png 89 89
23 https://cdn.sofifa.org/flags/52.png 89 89
24 https://cdn.sofifa.org/flags/27.png 89 89
25 https://cdn.sofifa.org/flags/18.png 88 95
26 https://cdn.sofifa.org/flags/111.png 88 89
27 https://cdn.sofifa.org/flags/54.png 88 90
28 https://cdn.sofifa.org/flags/56.png 88 89
29 https://cdn.sofifa.org/flags/27.png 88 88
... ... ... ...
18177 https://cdn.sofifa.org/flags/25.png 48 69
18178 https://cdn.sofifa.org/flags/46.png 48 65
18179 https://cdn.sofifa.org/flags/14.png 48 64
18180 https://cdn.sofifa.org/flags/42.png 48 58
18181 https://cdn.sofifa.org/flags/25.png 48 66
18182 https://cdn.sofifa.org/flags/56.png 48 65
18183 https://cdn.sofifa.org/flags/14.png 48 48
18184 https://cdn.sofifa.org/flags/14.png 48 55
18185 https://cdn.sofifa.org/flags/25.png 48 59
18186 https://cdn.sofifa.org/flags/155.png 47 64
18187 https://cdn.sofifa.org/flags/21.png 47 59
18188 https://cdn.sofifa.org/flags/50.png 47 62
18189 https://cdn.sofifa.org/flags/21.png 47 61
18190 https://cdn.sofifa.org/flags/14.png 47 67
18191 https://cdn.sofifa.org/flags/14.png 47 65
18192 https://cdn.sofifa.org/flags/14.png 47 64
18193 https://cdn.sofifa.org/flags/55.png 47 64
18194 https://cdn.sofifa.org/flags/27.png 47 65
18195 https://cdn.sofifa.org/flags/25.png 47 67
18196 https://cdn.sofifa.org/flags/163.png 47 61
18197 https://cdn.sofifa.org/flags/25.png 47 61
18198 https://cdn.sofifa.org/flags/14.png 47 70
18199 https://cdn.sofifa.org/flags/70.png 47 69
18200 https://cdn.sofifa.org/flags/42.png 47 62
18201 https://cdn.sofifa.org/flags/25.png 47 68
18202 https://cdn.sofifa.org/flags/14.png 47 65
18203 https://cdn.sofifa.org/flags/46.png 47 63
18204 https://cdn.sofifa.org/flags/14.png 47 67
18205 https://cdn.sofifa.org/flags/14.png 47 66
18206 https://cdn.sofifa.org/flags/14.png 46 66
Club ... Marking StandingTackle SlidingTackle \
0 FC Barcelona ... 33.0 28.0 26.0
1 Juventus ... 28.0 31.0 23.0
2 Paris Saint-Germain ... 27.0 24.0 33.0
3 Manchester United ... 15.0 21.0 13.0
4 Manchester City ... 68.0 58.0 51.0
5 Chelsea ... 34.0 27.0 22.0
6 Real Madrid ... 60.0 76.0 73.0
7 FC Barcelona ... 62.0 45.0 38.0
8 Real Madrid ... 87.0 92.0 91.0
9 Atlético Madrid ... 27.0 12.0 18.0
10 FC Bayern München ... 34.0 42.0 19.0
11 Real Madrid ... 72.0 79.0 69.0
12 Atlético Madrid ... 90.0 89.0 89.0
13 Manchester City ... 59.0 53.0 29.0
14 Chelsea ... 90.0 91.0 85.0
15 Juventus ... 23.0 20.0 20.0
16 Tottenham Hotspur ... 56.0 36.0 38.0
17 Atlético Madrid ... 59.0 47.0 48.0
18 FC Barcelona ... 25.0 13.0 10.0
19 Real Madrid ... 20.0 18.0 16.0
20 FC Barcelona ... 90.0 86.0 80.0
21 Paris Saint-Germain ... 52.0 45.0 39.0
22 FC Bayern München ... 17.0 10.0 11.0
23 Manchester City ... 30.0 20.0 12.0
24 Juventus ... 93.0 93.0 90.0
25 Paris Saint-Germain ... 34.0 34.0 32.0
26 Liverpool ... 38.0 43.0 41.0
27 Real Madrid ... 88.0 90.0 87.0
28 FC Bayern München ... 52.0 41.0 44.0
29 Napoli ... 51.0 24.0 22.0
... ... ... ... ... ...
18177 Blackpool ... 18.0 16.0 11.0
18178 Dalkurd FF ... 16.0 11.0 10.0
18179 Northampton Town ... 42.0 51.0 49.0
18180 St. Johnstone FC ... 12.0 15.0 16.0
18181 Bray Wanderers ... 43.0 49.0 45.0
18182 Atlético Nacional ... 44.0 42.0 46.0
18183 Cambridge United ... 15.0 15.0 13.0
18184 Lincoln City ... 47.0 49.0 53.0
18185 Derry City ... 39.0 39.0 48.0
18186 Beijing Renhe FC ... 53.0 41.0 51.0
18187 SpVgg Unterhaching ... 40.0 42.0 42.0
18188 Newport County ... 33.0 38.0 41.0
18189 SpVgg Unterhaching ... 28.0 15.0 22.0
18190 Cambridge United ... 35.0 44.0 47.0
18191 Cambridge United ... 45.0 42.0 46.0
18192 Cambridge United ... 41.0 41.0 44.0
18193 Unión Española ... 41.0 48.0 48.0
18194 Lecce ... 6.0 10.0 11.0
18195 Waterford FC ... 44.0 37.0 48.0
18196 Júbilo Iwata ... 41.0 44.0 54.0
18197 Cork City ... 41.0 47.0 38.0
18198 Burton Albion ... 15.0 11.0 13.0
18199 Vancouver Whitecaps FC ... 48.0 49.0 49.0
18200 Swindon Town ... 15.0 17.0 14.0
18201 Waterford FC ... 44.0 47.0 53.0
18202 Crewe Alexandra ... 40.0 48.0 47.0
18203 Trelleborgs FF ... 22.0 15.0 19.0
18204 Cambridge United ... 32.0 13.0 11.0
18205 Tranmere Rovers ... 20.0 25.0 27.0
18206 Tranmere Rovers ... 40.0 43.0 50.0
GKDiving GKHandling GKKicking GKPositioning GKReflexes \
0 6.0 11.0 15.0 14.0 8.0
1 7.0 11.0 15.0 14.0 11.0
2 9.0 9.0 15.0 15.0 11.0
3 90.0 85.0 87.0 88.0 94.0
4 15.0 13.0 5.0 10.0 13.0
5 11.0 12.0 6.0 8.0 8.0
6 13.0 9.0 7.0 14.0 9.0
7 27.0 25.0 31.0 33.0 37.0
8 11.0 8.0 9.0 7.0 11.0
9 86.0 92.0 78.0 88.0 89.0
10 15.0 6.0 12.0 8.0 10.0
11 10.0 11.0 13.0 7.0 10.0
12 6.0 8.0 15.0 5.0 15.0
13 6.0 15.0 7.0 6.0 12.0
14 15.0 12.0 10.0 7.0 10.0
15 5.0 4.0 4.0 5.0 8.0
16 8.0 10.0 11.0 14.0 11.0
17 14.0 8.0 14.0 13.0 14.0
18 87.0 85.0 88.0 85.0 90.0
19 85.0 91.0 72.0 86.0 88.0
20 5.0 8.0 13.0 9.0 13.0
21 12.0 5.0 13.0 13.0 10.0
22 90.0 86.0 91.0 87.0 87.0
23 13.0 15.0 6.0 11.0 14.0
24 3.0 3.0 2.0 4.0 3.0
25 13.0 5.0 7.0 11.0 6.0
26 14.0 14.0 9.0 11.0 14.0
27 13.0 14.0 16.0 12.0 12.0
28 15.0 15.0 15.0 5.0 14.0
29 8.0 4.0 14.0 9.0 10.0
... ... ... ... ... ...
18177 6.0 9.0 11.0 7.0 12.0
18178 47.0 46.0 50.0 45.0 51.0
18179 14.0 11.0 7.0 11.0 8.0
18180 45.0 49.0 50.0 50.0 45.0
18181 8.0 10.0 12.0 9.0 10.0
18182 9.0 15.0 15.0 8.0 6.0
18183 45.0 48.0 44.0 49.0 46.0
18184 12.0 5.0 12.0 14.0 15.0
18185 6.0 11.0 9.0 5.0 8.0
18186 15.0 7.0 14.0 6.0 8.0
18187 13.0 12.0 11.0 15.0 12.0
18188 5.0 12.0 8.0 13.0 10.0
18189 15.0 5.0 14.0 12.0 8.0
18190 13.0 7.0 14.0 10.0 8.0
18191 15.0 13.0 6.0 14.0 12.0
18192 11.0 11.0 8.0 12.0 13.0
18193 6.0 10.0 6.0 12.0 11.0
18194 52.0 52.0 52.0 40.0 44.0
18195 13.0 14.0 12.0 7.0 13.0
18196 10.0 12.0 6.0 11.0 8.0
18197 13.0 6.0 9.0 10.0 15.0
18198 46.0 52.0 58.0 42.0 48.0
18199 7.0 7.0 9.0 14.0 15.0
18200 11.0 15.0 12.0 12.0 11.0
18201 9.0 10.0 9.0 11.0 13.0
18202 10.0 13.0 7.0 8.0 9.0
18203 10.0 9.0 9.0 5.0 12.0
18204 6.0 5.0 10.0 6.0 13.0
18205 14.0 6.0 14.0 8.0 9.0
18206 10.0 15.0 9.0 12.0 9.0
Release Clause range_of_age
0 €226.5M More than 30
1 €127.1M More than 30
2 €228.1M 25-30
3 €138.6M 25-30
4 €196.4M 25-30
5 €172.1M 25-30
6 €137.4M More than 30
7 €164M More than 30
8 €104.6M More than 30
9 €144.5M Not defiend
10 €127.1M 25-30
11 €156.8M 25-30
12 €90.2M More than 30
13 €111M More than 30
14 €121.3M 25-30
15 €153.5M 20-25
16 €160.7M 20-25
17 €165.8M 25-30
18 €123.3M 25-30
19 €113.7M 25-30
20 €105.6M 25-30
21 €111M More than 30
22 €62.7M More than 30
23 €119.3M Not defiend
24 €44.6M More than 30
25 €166.1M Less than 20
26 €137.3M 25-30
27 €126.4M 25-30
28 NaN 25-30
29 €105.4M 25-30
... ... ...
18177 €193K Less than 20
18178 €94K Less than 20
18179 €119K Less than 20
18180 €78K 20-25
18181 €109K Less than 20
18182 €101K Less than 20
18183 NaN More than 30
18184 €78K Less than 20
18185 €88K Less than 20
18186 €167K Not defiend
18187 €66K Less than 20
18188 €143K Less than 20
18189 €125K Less than 20
18190 €165K Less than 20
18191 €119K Less than 20
18192 €119K Less than 20
18193 €99K Less than 20
18194 €109K Less than 20
18195 €153K Less than 20
18196 €113K Less than 20
18197 €88K Less than 20
18198 €165K Less than 20
18199 €175K Less than 20
18200 €143K Less than 20
18201 €153K Less than 20
18202 €143K Less than 20
18203 €113K Less than 20
18204 €165K Less than 20
18205 €143K Less than 20
18206 €165K Less than 20
[18207 rows x 90 columns]
###Markdown
Is there any missing data?
###Code
fifa19_player_df.isna().any()
###Output
_____no_output_____
###Markdown
Cleaning Data Since we are only interested in some of the columns, we will select just the columns we want.
###Code
fifa19_filtered = fifa19_player_df.filter(['Name','Nationality','Club','Release Clause','Weight', 'Height','range_of_age', 'Wage' ,'Overall'], axis=1)
fifa19_filtered.head()
###Output
_____no_output_____
###Markdown
Now, we want to convert the string-valued wage column to a numeric type using the function below.
###Code
def convert_stringtonumber(price):
"""
This function converts from price values in string type to float type numbers
Parameter:
price(str): Price values in string type with M & K as abbreviation for Million and Thousands respectively
Returns:
float: A float number represents the numerical value of the input parameter price(str)
"""
if price[-1] == 'K':
return int(price[1:-1])*1000
elif price[-1] == 'M':
return float(price[1:-1])*1000000
else:
return float(price[1:])
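# Quick sanity checks (hedged examples, assuming the usual '€...K' / '€...M' formats):
# convert_stringtonumber('€565K')   -> 565000
# convert_stringtonumber('€110.5M') -> 110500000.0
# convert_stringtonumber('€0')      -> 0.0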
# creating another columns for the engineeered columns with function converting the strings to number
fifa19_filtered['Wage_Number'] = fifa19_filtered['Wage'].map(lambda x: convert_stringtonumber(x))
# Encode the categorical variables Club and Nationality as integers using LabelEncoder
# (note: despite the 'onehot_encode' column names below, this is label encoding, not one-hot encoding)
lE = LabelEncoder()
fifa19_filtered['Club_onehot_encode'] = lE.fit_transform(fifa19_filtered['Club'].astype(str))
fifa19_filtered['Nationality_onehot_encode'] = lE.fit_transform(fifa19_filtered['Nationality'].astype(str))
fifa19_filtered[['Club_onehot_encode', 'Nationality_onehot_encode']].head()
###Output
_____no_output_____
###Markdown
First, in Height we replace the "'" separator with "." so the value can be converted to a float; in Weight we replace "lbs" with an empty string.
###Code
fifa19_filtered.Height = fifa19_filtered.Height.str.replace("'",".")
fifa19_filtered.Weight = fifa19_filtered.Weight.str.replace("lbs","")
###Output
_____no_output_____
###Markdown
Change the data types to appropriate numeric ones and fill missing values with the mean; this is a better choice than dropping the rows, and we cannot fill with zero since nobody has a height or weight of zero.
###Code
fifa19_filtered.Height = fifa19_filtered.Height.astype('float64')
fifa19_filtered.Weight = fifa19_filtered.Weight.astype('float64')
fifa19_filtered.Height = fifa19_filtered.Height.fillna(fifa19_filtered.Height.mean())
fifa19_filtered.Weight = fifa19_filtered.Weight.fillna(fifa19_filtered.Weight.mean())
fifa19_filtered.info()
fifa19_filtered.head()
###Output
_____no_output_____
###Markdown
Visualization
###Code
fifa19_filtered.nunique()
###Output
_____no_output_____
###Markdown
1 - What is the distribution of players' ages?
###Code
plt.figure(figsize=(20,10))
sns.countplot(fifa19_filtered['range_of_age'], palette='rocket')
plt.xlabel("Range of Age", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Most players' ages are between 20 and 30. 2 - What is the distribution of players' heights?
###Code
fifa19_filtered.Height.hist()
###Output
_____no_output_____
###Markdown
Most players' heights are between 5.50 and 6.50, but there are also many players around 5.25. 3 - What is the distribution of players' weights?
###Code
fifa19_filtered.Weight.hist()
###Output
_____no_output_____
###Markdown
Most players' weights are between 150 and 180. 4 - Which clubs have the highest average player wage?
###Code
fifa19_filtered.groupby('Club').mean()['Wage_Number'].sort_values(ascending = False).head(10)
fifa19_filtered.groupby('Club').mean()['Wage_Number'].sort_values(ascending = False)[:10].plot.bar()
plt.title("Top 10 clubs with the highest average wage in FIFA 19\n");
###Output
_____no_output_____
###Markdown
Real Madrid and Barcelona players receive the highest average wages. 5 - What is the distribution of players' overall ratings?
###Code
plt.figure(figsize=(20,10))
sns.countplot(fifa19_filtered['Overall'], palette='rocket')
plt.show()
###Output
_____no_output_____
###Markdown
Overall is a player attribute rating. We can notice that the most common value is 66. 6 - Who is the player with the highest overall rating?
###Code
fifa_best_players = pd.DataFrame.copy(fifa19_filtered.sort_values(by = 'Overall' , ascending = False ).head(10))
plt.figure(1 , figsize = (18 , 5))
sns.barplot(x ='Name' , y = 'Overall' , data = fifa_best_players ,palette='Set1')
plt.ylim(85,96)
plt.show()
###Output
_____no_output_____ |
Past/DSS/Classification/02.data_creaning.ipynb | ###Markdown
Data Preprocessing Scaling- `scale(X)`: basic (standard) scaling, using the mean and standard deviation- `robust_scale(X)`: uses the median and IQR (interquartile range), minimising the influence of outliers- `minmax_scale(X)`: scales so that the maximum/minimum values become 1 and 0, respectively- `maxabs_scale(X)`: scales so that the maximum absolute value and 0 become 1 and 0, respectively
###Code
from sklearn.preprocessing import scale, robust_scale, minmax_scale, maxabs_scale
x = (np.arange(8, dtype=np.float) - 3).reshape(-1, 1)
x = np.vstack([x, [20]]) # outlier
df = pd.DataFrame(np.hstack([x, scale(x), robust_scale(x), minmax_scale(x), maxabs_scale(x)]),
columns=["x", "scale(x)", "robust_scale(x)", "minmax_scale(x)", "maxabs_scale(x)"])
df
from sklearn.datasets import load_iris
iris = load_iris()
data1 = iris.data
data2 = scale(iris.data)
print("old mean:", np.mean(data1, axis=0))
print("old std: ", np.std(data1, axis=0))
print("new mean:", np.mean(data2, axis=0))
print("new std: ", np.std(data2, axis=0))
###Output
old mean: [5.84333333 3.054 3.75866667 1.19866667]
old std: [0.82530129 0.43214658 1.75852918 0.76061262]
new mean: [-1.69031455e-15 -1.63702385e-15 -1.48251781e-15 -1.62314606e-15]
new std: [1. 1. 1. 1.]
###Markdown
In practice, however, you should usually implement this with the Scaler classes- why? the test data must be scaled using the same statistics!1. Create the class object1. Estimate the transformation coefficients using the `fit()` method and the training data1. Transform the actual data using the `transform()` method (for test data)Alternatively, run steps 2 and 3 at once with the `fit_transform()` method (for train data)
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(data1)
data2 = scaler.transform(data1)
data1.std(), data2.std()
###Output
_____no_output_____
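###Markdown
A minimal sketch of the point above: the scaler is fitted on the training data only, and the same statistics are then reused on the test data (illustrative example; the hold-out split itself is arbitrary).
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# illustrative hold-out split of the iris features loaded above (data1)
X_train, X_test = train_test_split(data1, test_size=0.3, random_state=0)

scaler_tt = StandardScaler()
X_train_s = scaler_tt.fit_transform(X_train)   # fit + transform on the training data
X_test_s = scaler_tt.transform(X_test)         # reuse the training mean/std on the test data

# the test set is scaled with the training statistics, so its mean is only close to 0
X_train_s.mean(axis=0), X_test_s.mean(axis=0)
###Output
_____no_output_____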
###Markdown
--- Normalization A transformation that gives every individual sample the same magnitude, so a different transformation coefficient is applied to each sample- for what? - used when we have multi-dimensional independent variable vectors and only the **relative magnitudes** of each vector's elements matter - ex. when giving ratings for 5 movies, normalization lets us treat (1, 1, 2, 3, 1) and (2, 2, 4, 5, 2) as the same
###Code
from sklearn.preprocessing import normalize
x = np.vstack([np.arange(5, dtype=float) - 20, np.arange(5, dtype=float) - 2]).T
y1 = scale(x)
y2 = normalize(x)
print("original x:\n", x)
print("scale:\n", y1)
print("norms (scale)\n", np.linalg.norm(y1, axis=1))
print("normlize:\n", y2)
print("norms (normalize)\n", np.linalg.norm(y2, axis=1))
from sklearn.datasets import load_iris
iris = load_iris()
data1 = iris.data[:,:2]
data3 = normalize(data1)
sns.jointplot(data1[:,0], data1[:,1])
plt.show()
sns.jointplot(data3[:,0], data3[:,1])
plt.show()
###Output
_____no_output_____
###Markdown
--- Encoding Converts category values or text information into integers that are easier to process 1. One-Hot-Encoder Calling the `fit` method sets the following attributes- `n_values_`: the maximum number of classes- `feature_indices_`: when the input is a vector, slice information indicating each element- `active_features_`: the classes actually used
###Code
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
X = np.array([[0], [1], [2]])
X
ohe.fit(X)
print("최대 클래스 수 :", ohe.n_values_, "개")
print("입력이 벡터인 경우, 각 원소를 나타내는 slice정보 :", ohe.feature_indices_ ,"이 경우, 하나가 통으로")
print("실제 사용된 클래스들 :", ohe.active_features_)
ohe.transform(X).toarray() # toarray를 통해 일반적 배열로 바꿈 / 원래는 sparse matrix
X2 = np.array([[0, 0, 4], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
X2
ohe.fit(X2)
print("최대 클래스 수 : 각", ohe.n_values_, "개")
print("입력이 벡터인 경우, 각 원소를 나타내는 slice정보 :", ohe.feature_indices_)
print("실제 사용된 클래스들 :", ohe.active_features_)
ohe.transform(X2).toarray() # toarray를 통해 일반적 배열로 바꿈 / 원래는 sparse matrix
###Output
_____no_output_____
###Markdown
2. Imputer A transformation that fills in missing values- `missing_values`: the marker used for missing entries- `strategy`: how to fill them. The default is `"mean"` - `"mean"`, `"median"`, `"most_frequent"` >What about the Walmart project? `missing product information`- We could fill it using one of the three strategies above, or instead create a separate "other" category- But then, when identifying customers who bought "other" items, we are more likely to mistakenly conclude they bought the same product even though they actually bought different ones - the same issue applies to the three strategies above. Would it be better to drop those rows entirely? But they might actually be **the same product**.
###Code
from sklearn.preprocessing import Imputer
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
imp.fit_transform([[1, 2], [np.nan, 3], [7, 6]])
###Output
_____no_output_____
###Markdown
3. Binarizer Splits the result into 0 and 1 according to a threshold value. The default threshold is 0.
###Code
from sklearn.preprocessing import Binarizer
X = [[ 1., -1., 2.],
[ 2., 0., 0.],
[ 0., 1., -1.]]
binarizer = Binarizer().fit(X)
binarizer.transform(X) # values greater than 0 become 1
binarizer = Binarizer(threshold=1.1)
binarizer.transform(X) # values greater than 1.1 become 1 (so 1 becomes 0)
###Output
_____no_output_____
###Markdown
--- 4. PolynomialFeatures Transforms the input $x$ into polynomial features (`=default`)- `degree=2`: the polynomial degree- `interaction_only=False`: whether to generate only interaction terms- `include_bias=True`: whether to generate the constant (bias) term http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html
###Code
from sklearn.preprocessing import PolynomialFeatures
X = np.arange(6).reshape(3, 2)
X
poly = PolynomialFeatures(2) # create the object
poly.fit_transform(X) # transform
poly = PolynomialFeatures(interaction_only=True, include_bias=False) # generate `only` first-order + interaction terms
poly.fit_transform(X)
###Output
_____no_output_____
###Markdown
5. FunctionTransformer Transforms the input $x$ with an arbitrary user-defined function rather than a polynomial
###Code
from sklearn.preprocessing import FunctionTransformer
def kernel(X): # the transformation function has to be defined by the user
x0 = X[:, :1]
x1 = X[:, 1:2]
x2 = X[:, 2:3]
X_new = np.hstack([x0, 2 * x1, x2 ** 2, np.log(x1)])
return X_new
X = np.arange(12).reshape(4, 3)
X
kernel(X)
FunctionTransformer(kernel).fit_transform(X) # similar to pandas apply
###Output
_____no_output_____
###Markdown
6. Label Encoding Converts to integers from $0 \;\;\text{~}\;\; K-1$ rather than to dummy variables- `classes_`: check the conversion rule- `inverse_transform`: inverse transformation
###Code
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder() # create the object
y = ['A', 'B', 'A', 'A', 'B', 'C', 'C', 'A', 'C', 'B']
# le.fit(y) # fit
print(le.fit_transform(y)) # fit_transform is also supported
le.classes_
###Output
[0 1 0 0 1 2 2 0 2 1]
###Markdown
The rule is that 0, 1, 2 correspond to the classes in the list above
###Code
y2 = le.transform(y)
y2
le.inverse_transform(y2) # inverse transform
###Output
/home/mk/anaconda3/lib/python3.6/site-packages/sklearn/preprocessing/label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
if diff:
###Markdown
Label Binarizer Similar to the one-hot encoder, but used to **encode the dependent (target) variable** rather than the independent variables
###Code
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer() # create the object
y = ['A', 'B', 'A', 'A', 'B', 'C', 'C', 'A', 'C', 'B']
print(lb.fit_transform(y)) # fit_transform is also supported
lb.classes_ # of course, fit and transform can also be called separately (for test data)
lb.inverse_transform(lb.transform(y)) # inverse transform
###Output
_____no_output_____ |
notebooks/variation-of-parameters.ipynb | ###Markdown
Solve the nonhomogeneous differential equation $y'' + y = \sec t + \tan t + t$ using variation of parameters
###Code
yc, p = nth_order_const_coeff(1, 0, 1)
p.display()
yp, p = variation_of_parameters(yc, sec(t) + tan(t) + t)
p.display()
###Output
_____no_output_____
###Markdown
Solve the nonhomogeneous differential equation $y'' - 12 y' + 36y = t^{-2} e^{6t} + 14t^2$ using variation of parameters
###Code
yc, p = nth_order_const_coeff(1, -12, 36)
p.display()
yp, p = variation_of_parameters(yc, t ** (-2) * exp(6 * t) + 14 * t ** 2)
p.display()
###Output
_____no_output_____ |
M220P/my_notebooks/NOTES - Chapter 2 Intro - Paging - Basic Aggregation.ipynb | ###Markdown
Chapter 2 (Intro, Paging, Basic Aggregation) Intro
###Code
uri = "mongodb+srv://m220student:[email protected]/test"
import pymongo
from pprint import pprint
client = pymongo.MongoClient(uri)
list(client.list_databases())
client.list_database_names()
mflix = client.mflix
mflix.list_collection_names()
comments = mflix.comments
comments.find_one()
movies = mflix.movies
mflix.command("dbstats")
mflix.command("collstats", "movies").keys()
movies.count_documents({"$text" : {'$search': "Indiana"}})
# Exact-match phrases have to be quoted in double quotes, not single
# quotes.
findparams = {"$text" : {'$search': '"Indiana Jones"'}}
items = list(movies.find(findparams, {'_id': 0, 'title': 1}).limit(2))
items
aggregate_params = [{'$match': findparams}, {'$limit': 50}, {'$project': {'_id': 0, 'title': 1, 'year': 1}}, {'$sort': {'year': pymongo.DESCENDING}}]
items = list(movies.aggregate(aggregate_params))
items
findparams
list(movies.aggregate(aggregate_params + [{'$count': 'num_movies'}]))
list(movies.aggregate(aggregate_params + [{'$skip': 5}]))
###Output
_____no_output_____
###Markdown
Basic Aggregation
###Code
movies.count_documents({})
pipeline = [
{'$match': {'directors': 'Sam Raimi'}},
{'$project': {'_id': 0, 'title': 1, 'imdb.rating': 1}},
{'$group': {'_id': 0, 'avg_rating': {'$avg': '$imdb.rating'}}}
]
out = list(movies.aggregate(pipeline))
len(out)
out
x = [1, 2]
x.extend([3, 4])  # list.append takes a single element; use extend to add several
x
###Output
_____no_output_____ |
uncertainty/uncertainty.ipynb | ###Markdown
$\quad$Uncertainty in Mathematical Models ENGSCI263: Engineering Science Design I$\,$ Department of Engineering Science $\,$
###Code
# imports and environment: this cell must be executed before any other in the notebook
%matplotlib inline
from uncertainty263 import*
###Output
_____no_output_____
###Markdown
When using mathematical models to describe the real-world, we must contend with a staggering amount of **ignorance**1. Indeed, mathematical modelling as an activity is motivated by our own ignorance. Perhaps we are interested in the unknown value of some system parameter? Or maybe we harbour a general concern about the unpredictable future? Or, turning it around, if we already **know** the answer, what is the point of modelling?**Minimizing** the effects of error and uncertainty, and articulating **quantitatively** that which remains, are nontrivial undertakings. Too often, these steps are neglected: a model prediction is made that, in the absence of any disclosure of uncertainty, is **implied to be perfect**. Vague caveats might sometimes be offered about the prediction's quality in the face of some model simplification or assumed parameter value. However, the greatest confidence is attained when model predictions are attached **probabilistic bounds**, e.g., "*by 2024, the largest earthquake triggered by gas extraction operations has a 90% likelihood of lying in the interval [3.8, 4.6], with the most likely outcome being a M 4.2*".Error and uncertainty enter into the modelling process **at all stages**, for instance: - When a model is first developed and we are required to make **simplifying assumptions** (structural error).- When **imperfect or incomplete measurements** (observation error) are used to calibrate the model.- When the model is used to **make a prediction** of the future (prediction uncertainty), even though its parameters are **not completely constrained** (parameter uncertainty).Because uncertainty can never be eliminated entirely, it is important that rigorous methods are developed to handle it. For this module, you will be assessed on your ability to, for a specific scenario, **identify** where significant uncertainty could arise and, where possible, **quantify** its impact on modelling outcomes.1 Note, this is not the type of ignorance that carries negative connotations, i.e., obstinate closed-mindedness, an indifference to (or even a conscious intent to ignore) fact or reason. Rather, we are referring to the gaps in our knowledge: always present, but recognised, and with a desire to minimise. 1 Observational error When a quantity, $y_i$, is measured twice in succession, $\tilde{y}_i^{(1)}$ and $\tilde{y}_i^{(2)}$, it is unlikely that the same value will be obtained. Assuming that the true2 underlying value, $y_{i,0}$, is not changing, differences between repeated measurements, $\tilde{y}_i^{(j)}$ are generally attributed to **random error**, $\epsilon$, that reflects instrument limitations or uncontrollable variables. In the absence of systematic error, the measurement process is expressed algebraically\begin{equation}\tilde{y}_i^{(j)}=y_{i,0}+\epsilon,\end{equation}where $\epsilon$ is a normally distributed random variable with zero mean and variance, $\sigma_i^2$, i.e., $\epsilon\sim\mathcal{N}(0,\sigma_i^2)$. 
While any particular measurement might be contaminated with significant random error, as we take the mean of **repeated measurements**, the effects of measurement error will be averaged out.We could replace the true value of $y_i$ above with the model, $f(x_i;\boldsymbol{\theta})$, to obtain a model for the observation process\begin{equation}\tilde{y}_i (\boldsymbol{\theta})=f(x_i;\boldsymbol{\theta})+\mathcal{N}(0,\sigma_i^2 ),\end{equation}which is a normal distribution centred on the model prediction, $f(x_i;\boldsymbol{\theta})$, i.e., $P(\tilde{y}_i│\boldsymbol{\theta})\sim\mathcal{N}(f(x_i;\boldsymbol{\theta}),\sigma_i^2)$. The notation $P(\tilde{y}_i│\boldsymbol{\theta})$ should be read (slowly, several times over in your head) "*the probability of obtaining an observation, $\tilde{y}_i$, given a model of the process, $f(\cdot)$, which has the parameters, $\boldsymbol{\theta}$*" and it expresses the idea that "*the (underlying, true world) parameters **determine** the observations*". **Observations** are the critical ingredient in model calibration. They serve as a sort of target that the model should be able to, at least approximately, replicate. What should we do then when the target is fuzzy or partially obscured by observational error? Observations provide **information** about parameters, and the purpose of calibration is to extract that information. Thus a consequence of observational error is to place **limits** on what can be learned about the parameters.2 We shall use the subscript 0 to indicate a quantity is the "*true*", i.e., real world, value. 1.1 A Linear Example ***We shall return to the example below throughout this notebook.***Consider some physical process that happens to be well-described by a linear model. Linear models have two parameters: a gradient, $m$, and $y$-intercept, $c$, i.e., $y_i=f(x_i;m,c)=mx_i+c$. For the purposes of demonstration, we shall cast ourselves in the role of **Omniscient Supreme Being**, in which case we are able to “know” the true parameter values: $m=2$ and $c=3$, or $\boldsymbol{\theta}_0=[2.0,3.0]$. However, the less-enlightened humans on the ground3 don’t have access to this information, and must instead satisfy themselves by making measurements and developing a model. Being government employees, their instruments are not wildly sensitive ($\sigma_i^2=0.1$) and there is only budget to make ten observations, $\tilde{y}_i$. Remember: as almighty overlords, we can see both the true model (blue line) and the observations (open circles), but the poor humans can only see the latter.3 Simple-minded plebs.***Execute the cell below and answer the questions.***
###Code
observation_error()
# Move the slider N_obs to show more or less observations.
# What do the plotted normal distributions represent?
# Click the "True Process" checkbox. What does the centre of each normal distribution correspond to?
# Click the "Best Model" checkbox. Do the Best and True models agree? Why or why not?
# Click "ROLL THE DICE" and consider (i) the observations, (ii) the Best Model, and (iii) the True Process.
# Which ones have changed and which ones have stayed the same?
# Move the variance slider.
# How does agreement between Best and True models change as the variance decreases?
# Move the N_obs slider.
# How does agreement between Best and True models change as the more observations are taken?
###Output
_____no_output_____
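###Markdown
For reference, the observation model above can be reproduced in a few lines of stand-alone NumPy. This is a hedged sketch, not the code behind the `observation_error` widget: the true parameters and error variance follow Section 1.1, while the observation locations and random seed are arbitrary choices.
###Code
import numpy as np

# true parameters and observation-error variance from Section 1.1
m0, c0, var = 2.0, 3.0, 0.1
x = np.linspace(0., 1., 10)                 # ten observation locations
np.random.seed(1)                           # arbitrary seed, for repeatability
y_obs = m0 * x + c0 + np.random.normal(0., np.sqrt(var), x.size)

# the least-squares 'Best Model' parameters are close to, but not equal to, theta_0
m_hat, c_hat = np.polyfit(x, y_obs, 1)
m_hat, c_hat
###Output
_____no_output_____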
###Markdown
2 Parameter uncertainty To make a prediction with a model, a conscious choice must first be made about the values of **parameters** used as model inputs. However, parameters values are never known with **absolute certainty**. Instead, they are better characterised by a distribution4, which expresses our **belief about the true value** for that parameter. The two most common distributions are the uniform and normal distributions. Distributions express our ignorance. The **uniform** distribution bounds the value of a parameter, $\boldsymbol{\theta}_i$, between minimum and maximum values, $[\boldsymbol{\theta}_{min},\boldsymbol{\theta}_{max}]$ and expresses the notion "*the true parameter value is somewhere in this range, but I have no idea where*". The **normal** distribution goes further, expressing what we think is the most likely value of the parameter (the mean) but also attaching a degree of uncertainty (the variance). Rarely, though, will the true parameter value turn out to be exactly the mean5. 4 For some parameters, the distribution might be quite narrow, e.g., the value of gravity. But even gravitational constants are only known to a finite level of precision.5 A pregnant woman’s "due date" is the **most-likely** date of birth given the estimated date of conception and a distribution for the gestation time of a human foetus. That being said, only 4% of babies are actually born on their due date. So "most-likely" is certainly not the same as "likely".***Execute the cell below and answer the questions.***
###Code
priors()
# What is the area under the uniform distribution (left)?
# How does the height of the uniform distribution change as you slide the limits left and right?
# What is the area under the normal distribution (right)?
# Does a uniform distribution categorically rule out any parameter values? What about the normal distribution?
###Output
_____no_output_____
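###Markdown
The area questions above can also be checked numerically. The sketch below is an illustration using `scipy.stats` (not the course's `priors` widget): it evaluates a uniform and a normal prior over a grid and confirms that each density integrates to one. The particular limits, mean and variance are arbitrary.
###Code
import numpy as np
from scipy import stats

theta = np.linspace(-1., 7., 401)

# uniform prior on [1, 5]: "the true value is somewhere in this range, no idea where"
uniform_prior = stats.uniform(loc=1., scale=4.)
# normal prior: a most-likely value (the mean) plus a degree of uncertainty (the variance)
normal_prior = stats.norm(loc=3., scale=1.)

# both densities integrate to (approximately) one over theta
np.trapz(uniform_prior.pdf(theta), theta), np.trapz(normal_prior.pdf(theta), theta)
###Output
_____no_output_____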
###Markdown
When a parameter distribution is presented by the modeller based solely on consultation of the relevant literature or by drawing on their own expert knowledge, then it is called a **prior**. In Bayesian terms, the prior is a quantitative statement of belief (or ignorance) about a parameter’s value. When a set of observations, $\tilde{y}_i$, is used to provide further constraints on the parameter’s value (say, through model calibration), then we obtain a new distribution, called the **posterior**. Again, this is a quantitative statement of belief that, now, incorporates the relevant observations alongside our prior notions.For the case of a uniform prior ("*I have no idea what the value of this parameter is...*"), a reformulation of the weighted sum-of-squares objective function, $S(\boldsymbol{\theta})$, provides a simple way to determine the posterior parameter distribution. Defining $S(\boldsymbol{\theta})$ as6:\begin{equation} S(\boldsymbol{\theta})=\sum\limits_{i=1}^N\frac{1}{\sigma_i^2}\left(\tilde{y}_i-f(x_i;\boldsymbol{\theta})\right)^2,\end{equation}then the posterior distribution over multi-dimensional parameter space7 is \begin{equation} P(\boldsymbol{\theta}|\tilde{y}_i)=A e^{-S(\boldsymbol{\theta})/2},\end{equation} where $A$ is a normalising constant for the probability distribution. A proper understanding of what $P(\boldsymbol{\theta}|\tilde{y}_i)$ actually represents requires a little quiet introspection on the following points:1. **Information** about the true parameters, $\boldsymbol{\theta}_0$, is obtained by making **measurements**, $\tilde{y}_i$.2. All measurements, even the most meticulous ones, contain some amount of **error**, $\epsilon$.3. Therefore, the true parameters, $\boldsymbol{\theta}_0$, can **never be determined** exactly.4. However, we can find parameters, $\hat{\boldsymbol{\theta}}_0$, that give a model **most consistent** with the data (found by minimising $S(\boldsymbol{\theta})$, i.e., calibration).5. We concede that $\hat{\boldsymbol{\theta}}_0$ is not equal to $\boldsymbol{\theta}_0$, although it is **hopefully close**.6. Therefore, it is incumbent upon us to consider the multitude of other parameter sets, $\boldsymbol{\theta}$, that are near to $\hat{\boldsymbol{\theta}}_0$, and thus are also **possible candidates** for $\boldsymbol{\theta}_0$.The equation for $\tilde{y}_i (\boldsymbol{\theta})$ in Section 1 expresses a forward model for the measurement process: parameters ($\boldsymbol{\theta}$) determine observations ($\tilde{y}_i$). The posterior distribution $P(\boldsymbol{\theta}|\tilde{y}_i)$ expresses exactly the **opposite**, that observations determine8 the parameters, or "*what is the probability that the true parameter values are $\boldsymbol{\theta}$, given that I have made measurements, $\tilde{y}_i$*"9. Again, because $P(\boldsymbol{\theta}│\tilde{y}_i)$ is a distribution, it expresses both our knowledge ("*These values are likely...*") and our ignorance ("*...but we can't choose just one*").6 Note, compared to the definition in the Calibration notebook, the weights on each term are the reciprocal of the variance for the associated measurement error. Fun fact to annoy your friends: when a set of measurements have different error, $\sigma_i^2$, they are said to be **heteroskedastic**.7 For a derivation, see Appendix A. This is not examinable.8 We don’t mean "determine" in the **deterministic** sense, i.e., to cause the parameters to take on particular values. 
Rather, we refer to the **inference** sense, i.e., to provide the information necessary to constrain the parameter values.9 You have probably heard the old trope "*we cannot prove a hypothesis, only falsify it*". The posterior distribution has a nice interpretation in this context: each parameter set represents the hypothesis "*this is the true parameter set, $\boldsymbol{\theta}_0$*". By computing the posterior, we falsify (probabilistically) some of those hypotheses. 2.1 Computing posterior parameter distributions According to the definition above, to compute the posterior we first need to calculate the **sum-of-squares objective function**, $S(\boldsymbol{\theta})$, a surface over multi-dimensional parameter space. For our purposes, it is sufficient to approximate this surface by computing $S(\boldsymbol{\theta}_k)$, where $\boldsymbol{\theta}_k$ are the points of a **discretised grid**. This is a grid search10, a brute force method that nevertheless parallelises well11. Once $S(\boldsymbol{\theta}_k)$ has been computed, it can be substituted into equation for the posterior to obtain the discretised probability density function. The coefficient $A$ is determined by numerical integration of $e^{-S(\boldsymbol{\theta})/2}$ using a suitable quadrature scheme. Computing $P(\boldsymbol{\theta}_k |\tilde{y}_i)$ on a grid of parameters is **equivalent** to the assumption of a uniform prior: we are implicitly saying "*the likelihood of $\boldsymbol{\theta}_k$ being the true parameter, $\boldsymbol{\theta}_0$, is related only to its goodness-of-fit with the data and not to any prior notions I have about what $\boldsymbol{\theta}_0$ should be.*" Once $P(\boldsymbol{\theta}_k |\tilde{y}_i)$ has been computed, it should be **inspected** to assess whether the parameter grid is sufficiently large: $P(\boldsymbol{\theta}_k |\tilde{y}_i)$ should be **small** at the edges of the grid.10 A grid search was used to generate contours of $S(\boldsymbol{\theta})$ for Figs. 1 and 2 in the Calibration notes, and some of the figures in the first two labs.11 Computing $S(\boldsymbol{\theta}_1)$, which involves a forward run of the model for parameters $\boldsymbol{\theta}_1$, does not depend on knowing $S(\boldsymbol{\theta})$ at any other point. Contrast this to the gradient descent algorithm: we cannot compute $\boldsymbol{\theta}_4$ without first knowing $\boldsymbol{\theta}_3$. 2.2 Marginal parameter distributions Suppose there are two parameter – let’s consider $c$ and $k$ again, for the case of damped car suspension – in which case $P(c,k|\tilde{y}_i)$ is a surface over 2D space. If we were able to determine the value of one of the parameters, say $c=c_0$, by other methods, the uncertain distribution of $k$ that remains is found by taking a slice through $P(c,k|\tilde{y}_i)$ at $c=c_0$ (see Fig. 2). Another question we might ask runs along the lines: "*what is the uncertain distribution of $k$ given the uncertainty of $c$?*" or, alternatively "*yes, $c$ is uncertain, but I don’t care; I just need to know the uncertainty of $k$.*" This is called the **marginal distribution** of $k$ and is obtained by integrating $P(c,k|\tilde{y}_i)$ over $c$\begin{equation}P(k|\tilde{y}_i)=\int P(c,k|\tilde{y}_i)dc.\end{equation} An illustration of marginal distributions and their relation to the posterior is shown in Fig. 1. **Figure 1: A sample objective function (left) over $c$-$k$ parameter space, and the corresponding posterior and marginal distributions (right). 
Note how $P(\boldsymbol{\theta}|\tilde{y}_i)$ goes to zero at the edges of the grid, indicating the distribution is well resolved.** 2.3 A Linear Example ***Recall the example from Section 1.1.***Our group of human scientists have satisfied themselves that the physical process described previously is probably linear in nature and, therefore, a model of the form $y_i=mx_i+c$ will provide a suitable fit to the data. Using **gradient descent** calibration, they quickly determine that the best-fitting model has $m=2.2$ and $c=3.1$, i.e., $\hat{\boldsymbol{\theta}}_0=[2.2,3.1]$. Clearly, this is different to the true value, $\boldsymbol{\theta}_0=[2.0,3.0]$ (Section 1.1), but that was expected.To gain a more in-depth understanding of the possible candidate models different from $\hat{\boldsymbol{\theta}}_0$, a grid search is undertaken to first determine $S(\boldsymbol{\theta})$ and then subsequently compute a posterior, $P(\boldsymbol{\theta}|\tilde{y}_i)$. The resulting distribution can be seen by running the cell below. Remember, the posterior distribution is making two important points:1. We don’t currently know, in fact we will never know, the true parameter, $\boldsymbol{\theta}_0$. Even our best guess, the least-squares fit, $\hat{\boldsymbol{\theta}}_0$, will be wrong to some degree.2. Nevertheless, we can determine which parameter combinations are consistent with the data, and rank their likelihood relative to each other.Given these caveats, it is comforting to see that the true value, $\boldsymbol{\theta}_0$, is near enough to the centre of mass of $P(\boldsymbol{\theta}|\tilde{y}_i)$ that it is among the likely candidates.***Execute the cell below and answer the questions.***
###Code
Nm = 41
Nc = 41
posterior(Nm,Nc)
# The righthand figure is computed by a grid search over c-m parameter space.
# Use the sliders to fit different models (left) and explore parameter space (right).
# The best-fit model is not the same as the true process. Why?
# Does the best-fit model or the true process correspond to the higher value on the righthand plot?
# The righthand inset shows the posterior as a surface. The shape is 2D Gaussian.
# If we were to fit a set of major and minor axes to the posterior, would they align with the c- and m- axes?
# Change the values of Nm and Nc in the cell above and rerun. How many grid samples are required to
# 'properly' estimate the posterior?
# What tradeoffs are there in estimating the posterior?
###Output
_____no_output_____
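###Markdown
For reference, the grid-search recipe of Section 2.1 can be written out in a few lines of stand-alone code. This is a hedged sketch, not the code behind the `posterior` widget: it uses synthetic straight-line observations like those in Section 1.1, and the grid limits and resolution are arbitrary.
###Code
import numpy as np

# synthetic observations, as in the Section 1.1 sketch
m0, c0, var = 2.0, 3.0, 0.1
x = np.linspace(0., 1., 10)
np.random.seed(1)
y_obs = m0 * x + c0 + np.random.normal(0., np.sqrt(var), x.size)

# discretised parameter grid (equivalent to a uniform prior)
m_grid = np.linspace(1.0, 3.0, 41)
c_grid = np.linspace(2.0, 4.0, 41)
M, C = np.meshgrid(m_grid, c_grid, indexing='ij')

# weighted sum-of-squares objective S(theta_k) at every grid point
S = np.zeros(M.shape)
for i in range(M.shape[0]):
    for j in range(M.shape[1]):
        S[i, j] = np.sum((y_obs - (M[i, j] * x + C[i, j]))**2) / var

# un-normalised posterior P ~ exp(-S/2), then normalise (this fixes the constant A)
post = np.exp(-S / 2.)
post /= np.trapz(np.trapz(post, c_grid, axis=1), m_grid)

# marginal distribution of m: integrate the posterior over c
marginal_m = np.trapz(post, c_grid, axis=1)
m_grid[np.argmax(marginal_m)]
###Output
_____no_output_____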
###Markdown
Note how the posterior - a 2D Gaussian distribution - is **rotated** with major and minor axes mis-aligned to the $c$- and $m$-axes. This indicates that the parameters $c$ and $m$ are **correlated** or, in this case, anti-correlated. In simpler terms, if we attempt to fit a steeper line (higher $m$) then we need a smaller intercept (smaller $c$) to obtain a reasonable match with the data. In the context of **inverse modelling**, where the entire exercise is motivated by a desire to determine $\boldsymbol{\theta}$, then the posterior parameter distribution (and the corresponding marginal distributions) communicate appropriately both our **best estimate** of $\boldsymbol{\theta}$ (the vector of values that maximise $P(\boldsymbol{\theta}|\tilde{y}_i))$ as well as our **uncertainty**. However, when the goal is to make a prediction of the future, then model calibration is only an intermediate step. We must now consider how uncertainty in the parameters results in uncertainty in the predictions. 3 Prediction uncertainty If we were **absolutely certain** that we had discovered the true values of all parameters, $\boldsymbol{\theta}_0$, then we could simply plug these into the forward model, $f(x_f;\boldsymbol{\theta}_0)$, to obtain a [rock solid prediction of the future](https://media.socastsrm.com/wordpress/wp-content/blogs.dir/697/files/2017/12/THE-ROCK-PRESIDENT-1.jpg), $y_f$. Unfortunately, when it comes to modelling, absolute certainty is in short supply. In fact, we saw in the previous section that, when observational error is allowed for, our best estimate, $\hat{\boldsymbol{\theta}}_0$, is always going to be wrong to some degree, which means the prediction, $f(x_f;\hat{\boldsymbol{\theta}}_0)$ **is always going to be wrong** as well. Oh dear. Not a great start.If we were absolutely **averse to risk**, then we might take a no-surprises approach and say "*what are all the possible parameter sets, $\boldsymbol{\theta}$? Let’s predict the future for each of these, and then at least we’ll have an idea of the possible outcomes.*" This is equivalent to sampling parameter sets from a uniform prior and yields, not one, but an [**ensemble of models**](https://xkcd.com/1885/) with an ensemble of outcomes. In the context of, say, climate modelling, this ensemble of outcomes might tell us that the 21st century could see anywhere between 5$^{\circ}$C of warming and 2$^{\circ}$C of cooling, although neither outcome is ranked more or less likely than the other. In fact though, we do know that some predictions are **more likely** to come true than others, because some of the parameter sets are more likely to be correct than others. For instance, suppose we have used some observations to compute a posterior distribution for the parameters, $P(\boldsymbol{\theta}|\tilde{y}_i)$, from which we learn that $\boldsymbol{\theta}_1$ is twice as likely to be the true parameter set than $\boldsymbol{\theta}_2$. Then, we would imagine the outcome $y_{f,1}=f(x_f;\boldsymbol{\theta}_1)$ to, similarly, be twice as likely as the outcome $y_{f,2}=f(x_f;\boldsymbol{\theta}_2)$. We shall use this reasoning as the basis for constructing a **future forecast**12, which is simply a set of predictions with a probability attached to each.To construct the forecast, we take samples of possible parameter sets from the posterior distribution, $P(\boldsymbol{\theta}|\tilde{y}_i)$ and then compute a prediction for each. 
The posterior is not uniform but rather is weighted toward the most-likely parameter combinations and, therefore, the predictions too will be **weighted** towards the most-likely outcomes. As more and more samples are computed, a histogram of the predictions – the forecast – will begin to resemble a probability distribution, $P(y_f|\tilde{y}_i)$, which can be subsequently interrogated for its statistical properties. Of particular interest are the median outcome and an **interval** centred on it that encloses some substantial proportion of the outcomes13. **How many** samples from the posterior should be used to generate the forecast? The answer depends on how long the forward model takes to run, the number of computers available for running models14, and the diminishing returns in resolving $P(y_f |\tilde{y}_i)$.12 Here, we define the term forecast to mean a probabilistic prediction. However, take care, the terminology may mean different things to different people.13 90% is good number, although a high stakes outcome may call for a wider interval.14 Like the grid search, this step parallelises well 3.1 A Linear Example ***Recall the example from Section 2.3.***The stupid humans wish to use their calibrated model and understanding of the posterior to make a prediction. For this rather contrived scenario, they will attempt to predict the physical process at $x_f=3.0$, which might be regarded as "*about twice as far into the future than the period we have data for, [0-1]*". The true future outcome is $y_{f,0}=f(x_f;\boldsymbol{\theta}_0 )=9$, but this is only learned long after the prediction is made. The prediction made using just the best-fit model is $y_f=f(x_f;\hat{\boldsymbol{\theta}}_0)=9.75$. Not bad for mere humans! But also wrong.To be more rigorous, it is desirable to develop some sort of confidence in the prediction, $y_f$. As per Section 3, sample parameter sets are chosen15 from the posterior distribution developed in Section 2.3, and a prediction is computed for each. Compiling a histogram of 2000 such samples – a forecast of the future – we can see that the 90% confidence interval is rather wide, [6, 13], but it **does contain** the true future outcome. 15 For this step, I fitted a multivariate normal distribution and sampled from it using the `scipy.stats.multivariate_normal` class in Python. In practical situations, the posterior may not have a nice shape and alternative sampling methods might be required (e.g., bootstrapping).***Execute the cell below and answer the questions.***
###Code
var = 0.1
prediction(var)
# The figures above show:
# (left) the modelled process
# (middle) the sampled posterior
# (right) the model forecast
# Add some more samples using the slider.
# How does the forecast change (width and shape) as you add more samples?
# Extend the forecast out using the slider for xf.
# How does the forecast change (width and shape) as you consider more distant futures?
# Increase the variance on the observation by changing 'var' and rerunning the cell. Try 0.01 and 1.0.
# How do the posterior and forecast change as you increase the observation error?
###Output
_____no_output_____
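###Markdown
The sampling step described in Section 3 can be sketched as follows. Again this is a hedged illustration, not the `prediction` widget: it assumes `post`, `M` and `C` from the grid-search sketch above are still in scope, and it samples grid cells in proportion to their posterior probability rather than fitting a multivariate normal.
###Code
import numpy as np

x_f = 3.0                                  # prediction point, as in Section 3.1
n_samples = 2000

# draw parameter sets with probability proportional to the posterior (assumes post, M, C from above)
p = (post / post.sum()).ravel()
idx = np.random.choice(p.size, size=n_samples, p=p)
m_s, c_s = M.ravel()[idx], C.ravel()[idx]

# one prediction per sampled parameter set -> the forecast
y_f = m_s * x_f + c_s

# median outcome and a 90% interval centred on it
np.percentile(y_f, [5, 50, 95])
###Output
_____no_output_____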
###Markdown
4 Structural error While definitions of **structural (or model) error** can be a little grey, here we are referring to any item falling under the rather broad umbrella of "*improper model formulation*" or "*deviations between model and reality*". This includes all model assumptions related to: - The governing physics, e.g., the differential equations being solved neglect gravity, which in practice has some influence on the outcome, but in this case is **assumed to be minor**.- Time and space discretization, e.g., to solve the problem in a reasonable amount of time, we often must solve on a grid comprising a finite number of points and advancing the solution with finite sized time steps. Anything in between might be inferred by an interpolation scheme, but is **not explicitly captured** by the model.- Represented features and homogenization. For instance, in a continuum model we might choose to **exclude** a finite-sized feature that has different properties to the rest of the continuum (an example from geothermal reservoir modelling is to not include a fault in the model). In practice, no continuum is perfectly homogeneous in its properties – all materials contain defects, imperfections or innate variability. However, characterizing and modelling heterogeneous materials is a non-trivial undertaking.Assessment of the size or impact of structural error often amounts to assessing the consequences of model assumptions (see Design notes, Section 4.3). In some cases, this requires running the model with the assumption relaxed and satisfying oneself that the behaviour or predictions are not substantially altered.The above assumptions – in fact any assumption made consciously, articulated or not – fall into a category of ignorance termed **known unknowns**. These are the things we know we don’t know or, as James Clerk Maxwell16 put it, "*thoroughly conscious ignorance*". Observation error, parameter and prediction uncertainty also fall under this heading.More difficult to deal with are the **unknown unknowns**, i.e., the things we don’t know that we don’t know17. This includes the physical processes we’re not aware are operating or that we have interpreted incorrectly, important features or structures that cannot be directly observed (again, unknown faults and fractures are a big challenge in modelling reservoirs), or bad information we have been provided that we actually think is good (data, expert knowledge). There is little that can be done to guard against unknown unknowns beyond a rigorous, professional approach to problem solving, i.e., gathering data, considering different modelling options, providing impartial self-critique, and maintaining an open-minded, but sceptical, outlook.16 19th century titan of physics, populariser of dimensional analysis, originator of the hipster beard.17 When you are first born, everything is an unknown unknown, which perhaps explains why infants find mathematical modelling quite challenging, hyuk hyuk hyuk. 4.1 A Linear Example ***Recall the example from Section 3.1.***Another group of scientists have developed a different model. They know without a doubt18 that the true process is logarithmic and is described by a relationship of the form: $y_i=a\,\ln(x_i-x_0)+b$. Having three parameters instead of two, their model is clearly more sophisticated than the linear one19. 
Undertaking a similar analysis of the data, $\tilde{y}_i$, they compute a best-fit model, compute the posterior parameter distribution and sample from it to obtain a forecast of the future (Sections 2-3). This is implemented in the cell below, alongside the forecast from the first group, who used a linear model. The 90% forecast interval for the logarithmic model excludes the true outcome by a large margin.How did the second group get it so wrong? They followed the same procedure as the first group: fitting a model to the data, exploring parameter space to construct a posterior, and then sampling from it to construct a forecast. Indeed, with respect to the observations, $\tilde{y}_i$, the best-fit logarithmic model has a smaller objective function ($S(\hat{\boldsymbol{\theta}}_0 )=4.2$) than the best-fit linear model ($6.5$), and on that basis we might prefer the former.The error they made occurred early on during model selection when the underlying physics were assumed to be logarithmic in nature. The particular choice may have been motivated by the data ("*the logarithmic model fits $\tilde{y}_i$ better*") or by an incorrect analysis of the physics. In either case, structural error was introduced during model development that, in the end, had a deleterious effect of future predictions.18 "*It is very difficult to find a black cat in a dark room. Especially when there is no cat.*" - [Proverb](https://i.kym-cdn.com/photos/images/original/001/153/173/152.jpg).19 Albeit, less correct.***Execute the cell below and answer the questions (be patient, this one takes a few seconds to setup).***
###Code
structural()
# The cell above offers three alternative models (including the log model described above).
# - Switch between the models using the dropdown menu.
# - Zoom in to see how each fits the data.
# - Use the slider to change the prediction window.
# Rank the FOUR models (incl. linear) in terms of fit to the data.
# (the legend on the lefthand figures gives the obj. func. value S)
# If 'ability to fit the data' was the ONLY criteria to choose a model,
# which would you choose?
# Of the three competing models, which ones successfully 'predict' the future at xf = 3?
# Describe the magnitude of the prediction error for alternative models as xf increases.
# What (real-world) actions could you take to help choose between competing models?
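###Output
_____no_output_____
###Markdown
The structural-error point can also be illustrated with a stand-alone sketch: fit a linear and a logarithmic model to the same synthetic data and compare their extrapolations at $x_f=3$. This is an illustration only (it assumes `scipy` is available and uses its own synthetic data, starting guess and bounds), not the models behind the `structural` widget.
###Code
import numpy as np
from scipy.optimize import curve_fit

# synthetic data, consistent with the earlier sketches
m0, c0, var = 2.0, 3.0, 0.1
x = np.linspace(0., 1., 10)
np.random.seed(1)
y_obs = m0 * x + c0 + np.random.normal(0., np.sqrt(var), x.size)

# two competing model structures
def linear(x, m, c):
    return m * x + c

def logarithmic(x, a, x0, b):
    return a * np.log(x - x0) + b

p_lin, _ = curve_fit(linear, x, y_obs)
p_log, _ = curve_fit(logarithmic, x, y_obs, p0=[1., -1., 3.],
                     bounds=([-10., -10., -10.], [10., -0.01, 10.]))

# both fit the data on [0, 1], but their extrapolations to x_f = 3 differ
x_f = 3.0
linear(x_f, *p_lin), logarithmic(x_f, *p_log), m0 * x_f + c0   # the last value is the 'true' outcome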
###Output
_____no_output_____ |
Section-07-Variable-Transformation/07.03-Gaussian-transformation-feature-engine.ipynb | ###Markdown
Gaussian Transformation with Feature-EngineScikit-learn has recently released transformers to perform Gaussian mappings, as it calls these variable transformations. The PowerTransformer allows us to apply the Box-Cox and Yeo-Johnson transformations. With the FunctionTransformer, we can specify any function we want.The transformers per se do not allow us to select columns, but we can do so using a third transformer, the ColumnTransformer. Another thing to keep in mind is that Scikit-learn transformers return NumPy arrays, and not dataframes, so we need to be mindful of the order of the columns so as not to mess up our features. ImportantBox-Cox and Yeo-Johnson transformations need to learn their parameters from the data. Therefore, as always, before attempting any transformation it is important to divide the dataset into train and test sets.In this demo, I will not do so for simplicity, but when using these transformations in your pipelines, please make sure you do so. In this demoWe will see how to implement variable transformations using Feature-engine and the House Prices dataset.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from feature_engine import variable_transformers as vt
data = pd.read_csv('../houseprice.csv')
data.head()
###Output
_____no_output_____
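###Markdown
Before moving on, here is a minimal sketch of the train/test point made in the introduction (hedged example: the split and variable names are illustrative, and the rest of this demo keeps using the full dataset for simplicity).
###Code
from sklearn.model_selection import train_test_split

# illustrative split; the transformer learns its parameters from the training set only
X_train, X_test = train_test_split(data[['LotArea', 'GrLivArea']],
                                   test_size=0.3, random_state=0)

bct_demo = vt.BoxCoxTransformer(variables=['LotArea', 'GrLivArea'])
bct_demo.fit(X_train)                    # lambdas are estimated from the training data only
X_test_tf = bct_demo.transform(X_test)   # ...and then applied, unchanged, to the test data
X_test_tf.head()
###Output
_____no_output_____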
###Markdown
Plots to assess normalityTo visualise the distribution of the variables, we plot a histogram and a Q-Q plot. In the Q-Q plots, if the variable is normally distributed, the values of the variable should fall along a 45-degree line when plotted against the theoretical quantiles. We discussed this extensively in Section 3 of this course.
###Code
# plot the histograms to have a quick look at the variable distribution
# histogram and Q-Q plots
def diagnostic_plots(df, variable):
# function to plot a histogram and a Q-Q plot
# side by side, for a certain variable
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
df[variable].hist()
plt.subplot(1, 2, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.show()
diagnostic_plots(data, 'LotArea')
diagnostic_plots(data, 'GrLivArea')
###Output
_____no_output_____
###Markdown
LogTransformer
###Code
lt = vt.LogTransformer(variables = ['LotArea', 'GrLivArea'])
lt.fit(data)
# variables that will be transformed
lt.variables
data_tf = lt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
ReciprocalTransformer
###Code
rt = vt.ReciprocalTransformer(variables = ['LotArea', 'GrLivArea'])
rt.fit(data)
data_tf = rt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
ExponentialTransformer
###Code
et = vt.PowerTransformer(variables = ['LotArea', 'GrLivArea'])
et.fit(data)
data_tf = et.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
BoxCoxTransformer
###Code
bct = vt.BoxCoxTransformer(variables = ['LotArea', 'GrLivArea'])
bct.fit(data)
# these are the exponents for the BoxCox transformation
bct.lambda_dict_
data_tf = bct.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
Yeo-Johnson TransformerThe Yeo-Johnson transformer will be available in the next release of Feature-engine!!!
###Code
yjt = vt.YeoJohnsonTransformer(variables = ['LotArea', 'GrLivArea'])
yjt.fit(data)
# these are the exponents for the Yeo-Johnson transformation
yjt.lambda_dict_
data_tf = yjt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
Gaussian Transformation with Feature-EngineScikit-learn has recently released transformers to perform Gaussian mappings, as it calls these variable transformations. The PowerTransformer allows us to apply the Box-Cox and Yeo-Johnson transformations. With the FunctionTransformer, we can specify any function we want.The transformers per se do not allow us to select columns, but we can do so using a third transformer, the ColumnTransformer. Another thing to keep in mind is that Scikit-learn transformers return NumPy arrays, and not dataframes, so we need to be mindful of the order of the columns so as not to mess up our features. ImportantBox-Cox and Yeo-Johnson transformations need to learn their parameters from the data. Therefore, as always, before attempting any transformation it is important to divide the dataset into train and test sets.In this demo, I will not do so for simplicity, but when using these transformations in your pipelines, please make sure you do so. In this demoWe will see how to implement variable transformations using Feature-engine and the House Prices dataset.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import feature_engine.transformation as vt
data = pd.read_csv('../houseprice.csv')
data.head()
###Output
_____no_output_____
###Markdown
Plots to assess normalityTo visualise the distribution of the variables, we plot a histogram and a Q-Q plot. In the Q-Q plots, if the variable is normally distributed, the values of the variable should fall along a 45-degree line when plotted against the theoretical quantiles. We discussed this extensively in Section 3 of this course.
###Code
# plot the histograms to have a quick look at the variable distribution
# histogram and Q-Q plots
def diagnostic_plots(df, variable):
# function to plot a histogram and a Q-Q plot
# side by side, for a certain variable
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
df[variable].hist()
plt.subplot(1, 2, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.show()
diagnostic_plots(data, 'LotArea')
diagnostic_plots(data, 'GrLivArea')
###Output
_____no_output_____
###Markdown
LogTransformer
###Code
lt = vt.LogTransformer(variables = ['LotArea', 'GrLivArea'])
lt.fit(data)
# variables that will be transformed
lt.variables
data_tf = lt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
ReciprocalTransformer
###Code
rt = vt.ReciprocalTransformer(variables = ['LotArea', 'GrLivArea'])
rt.fit(data)
data_tf = rt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
ExponentialTransformer
###Code
et = vt.PowerTransformer(variables = ['LotArea', 'GrLivArea'])
et.fit(data)
data_tf = et.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
BoxCoxTransformer
###Code
bct = vt.BoxCoxTransformer(variables = ['LotArea', 'GrLivArea'])
bct.fit(data)
# these are the exponents for the BoxCox transformation
bct.lambda_dict_
data_tf = bct.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
Yeo-Johnson TransformerThe Yeo-Johnson transformer will be available in the next release of Feature-engine!!!
###Code
yjt = vt.YeoJohnsonTransformer(variables = ['LotArea', 'GrLivArea'])
yjt.fit(data)
# these are the exponents for the Yeo-Johnson transformation
yjt.lambda_dict_
data_tf = yjt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
Gaussian Transformation with Feature-EngineScikit-learn has recently released transformers to perform Gaussian mappings, as it calls these variable transformations. The PowerTransformer allows us to apply the Box-Cox and Yeo-Johnson transformations. With the FunctionTransformer, we can specify any function we want.The transformers per se do not allow us to select columns, but we can do so using a third transformer, the ColumnTransformer. Another thing to keep in mind is that Scikit-learn transformers return NumPy arrays, and not dataframes, so we need to be mindful of the order of the columns so as not to mess up our features. ImportantBox-Cox and Yeo-Johnson transformations need to learn their parameters from the data. Therefore, as always, before attempting any transformation it is important to divide the dataset into train and test sets.In this demo, I will not do so for simplicity, but when using these transformations in your pipelines, please make sure you do so. In this demoWe will see how to implement variable transformations using Feature-engine and the House Prices dataset.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import feature_engine.transformation as vt
data = pd.read_csv('../houseprice.csv')
data.head()
###Output
_____no_output_____
###Markdown
Plots to assess normalityTo visualise the distribution of the variables, we plot a histogram and a Q-Q plot. In the Q-Q plots, if the variable is normally distributed, the values of the variable should fall along a 45-degree line when plotted against the theoretical quantiles. We discussed this extensively in Section 3 of this course.
###Code
# plot the histograms to have a quick look at the variable distribution
# histogram and Q-Q plots
def diagnostic_plots(df, variable):
# function to plot a histogram and a Q-Q plot
# side by side, for a certain variable
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
df[variable].hist()
plt.subplot(1, 2, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.show()
diagnostic_plots(data, 'LotArea')
diagnostic_plots(data, 'GrLivArea')
###Output
_____no_output_____
###Markdown
LogTransformer
###Code
lt = vt.LogTransformer(variables = ['LotArea', 'GrLivArea'])
lt.fit(data)
# variables that will be transformed
lt.variables_
data_tf = lt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
ReciprocalTransformer
###Code
rt = vt.ReciprocalTransformer(variables = ['LotArea', 'GrLivArea'])
rt.fit(data)
data_tf = rt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
ExponentialTransformer
###Code
et = vt.PowerTransformer(variables = ['LotArea', 'GrLivArea'])
et.fit(data)
data_tf = et.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
BoxCoxTransformer
###Code
bct = vt.BoxCoxTransformer(variables = ['LotArea', 'GrLivArea'])
bct.fit(data)
# these are the exponents for the BoxCox transformation
bct.lambda_dict_
data_tf = bct.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____
###Markdown
Yeo-Johnson TransformerThe Yeo-Johnson transformer will be available in the next release of Feature-engine!!!
###Code
yjt = vt.YeoJohnsonTransformer(variables = ['LotArea', 'GrLivArea'])
yjt.fit(data)
# these are the exponents for the Yeo-Johnson transformation
yjt.lambda_dict_
data_tf = yjt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
###Output
_____no_output_____ |
examples/LPKT/LPKT.ipynb | ###Markdown
Learning Process-consistent Knowledge Tracing (LPKT)This notebook will show you how to train and use LPKT.First, we will show how to get the data (here we use assistment-2017 as the dataset).Then we will show how to train an LPKT model and persist its parameters.Finally, we will show how to load the parameters from the file and evaluate on the test dataset.The script version can be found in [LPKT.py](LPKT.ipynb) Data PreparationBefore we process the data, we first need to acquire the dataset, which is shown in [prepare_dataset.ipynb](prepare_dataset.ipynb)
###Code
import numpy as np
from load_data import DATA
def generate_q_matrix(path, n_skill, n_problem, gamma=0):
with open(path, 'r', encoding='utf-8') as f:
for line in f:
problem2skill = eval(line)
q_matrix = np.zeros((n_problem + 1, n_skill + 1)) + gamma
for p in problem2skill.keys():
q_matrix[p][problem2skill[p]] = 1
return q_matrix
batch_size = 32
n_at = 1326
n_it = 2839
n_question = 102
n_exercise = 3162
seqlen = 500
d_k = 128
d_a = 50
d_e = 128
q_gamma = 0.03
dropout = 0.2
dat = DATA(seqlen=seqlen, separate_char=',')
train_data = dat.load_data('../../data/anonymized_full_release_competition_dataset/train.txt')
test_data = dat.load_data('../../data/anonymized_full_release_competition_dataset/test.txt')
q_matrix = generate_q_matrix(
'../../data/anonymized_full_release_competition_dataset/problem2skill',
n_question, n_exercise,
q_gamma
)
###Output
_____no_output_____
###Markdown
Training and Persistence
###Code
import logging
logging.getLogger().setLevel(logging.INFO)
from EduKTM import LPKT
lpkt = LPKT(n_at, n_it, n_exercise, n_question, d_a, d_e, d_k, q_matrix, batch_size, dropout)
lpkt.train(train_data, test_data, epoch=2, lr=0.003)
lpkt.save("lpkt.params")
###Output
Training: 100%|██████████| 61/61 [17:57<00:00, 17.66s/it]
Testing: 0%| | 0/26 [00:00<?, ?it/s]
###Markdown
Loading and Testing
###Code
lpkt.load("lpkt.params")
_, auc, accuracy = lpkt.eval(test_data)
print("auc: %.6f, accuracy: %.6f" % (auc, accuracy))
###Output
INFO:root:load parameters from lpkt.params
Testing: 100%|██████████| 26/26 [01:18<00:00, 3.04s/it]
###Markdown
Learning Process-consistent Knowledge Tracing (LPKT)This notebook will show you how to train and use LPKT.First, we will show how to get the data (here we use assistment-2017 as the dataset).Then we will show how to train an LPKT model and persist its parameters.Finally, we will show how to load the parameters from the file and evaluate on the test dataset.The script version can be found in [LPKT.py](LPKT.ipynb) Data PreparationBefore we process the data, we first need to acquire the dataset, which is shown in [prepare_dataset.ipynb](prepare_dataset.ipynb)
###Code
import numpy as np
from load_data import DATA
def generate_q_matrix(path, n_skill, n_problem, gamma=0):
with open(path, 'r', encoding='utf-8') as f:
for line in f:
problem2skill = eval(line)
q_matrix = np.zeros((n_problem + 1, n_skill + 1)) + gamma
for p in problem2skill.keys():
q_matrix[p][problem2skill[p]] = 1
return q_matrix
batch_size = 32
n_at = 1326
n_it = 2839
n_question = 102
n_exercise = 3162
seqlen = 500
d_k = 128
d_a = 50
d_e = 128
q_gamma = 0.03
dropout = 0.2
dat = DATA(seqlen=seqlen, separate_char=',')
train_data = dat.load_data('../../data/anonymized_full_release_competition_dataset/train.txt')
test_data = dat.load_data('../../data/anonymized_full_release_competition_dataset/test.txt')
q_matrix = generate_q_matrix(
'../../data/anonymized_full_release_competition_dataset/problem2skill',
n_question, n_exercise,
q_gamma
)
###Output
_____no_output_____
###Markdown
Training and Persistence
###Code
import logging
logging.getLogger().setLevel(logging.INFO)
from EduKTM import LPKT
lpkt = LPKT(n_at, n_it, n_exercise, n_question, d_a, d_e, d_k, q_matrix, batch_size, dropout)
lpkt.train(train_data, test_data, epoch=2, lr=0.003)
lpkt.save("lpkt.params")
###Output
Training: 100%|██████████| 61/61 [17:57<00:00, 17.66s/it]
Testing: 0%| | 0/26 [00:00<?, ?it/s]
###Markdown
Loading and Testing
###Code
lpkt.load("lpkt.params")
_, auc, accuracy = lpkt.eval(test_data)
print("auc: %.6f, accuracy: %.6f" % (auc, accuracy))
###Output
INFO:root:load parameters from lpkt.params
Testing: 100%|██████████| 26/26 [01:18<00:00, 3.04s/it]
###Markdown
Learning Process-consistent Knowledge Tracing (LPKT)This notebook will show you how to train and use LPKT.First, we will show how to get the data (here we use assistment-2017 as the dataset).Then we will show how to train an LPKT model and persist its parameters.Finally, we will show how to load the parameters from the file and evaluate on the test dataset.The script version can be found in [LPKT.py](LPKT.ipynb) Data PreparationBefore we process the data, we first need to acquire the dataset, which is shown in [prepare_dataset.ipynb](prepare_dataset.ipynb)
###Code
import numpy as np
from load_data import DATA
def generate_q_matrix(path, n_skill, n_problem, gamma=0):
with open(path, 'r', encoding='utf-8') as f:
for line in f:
problem2skill = eval(line)
q_matrix = np.zeros((n_problem + 1, n_skill + 1)) + gamma
for p in problem2skill.keys():
q_matrix[p][problem2skill[p]] = 1
return q_matrix
batch_size = 64
n_at = 9632
n_it = 2890
n_question = 102
n_exercise = 3162
seqlen = 500
d_k = 128
d_a = 50
d_e = 128
q_gamma = 0.3
dropout = 0.2
dat = DATA(seqlen=seqlen, separate_char=',')
train_data = dat.load_data('../../data/anonymized_full_release_competition_dataset/train.txt')
test_data = dat.load_data('../../data/anonymized_full_release_competition_dataset/test.txt')
q_matrix = generate_q_matrix(
'../../data/anonymized_full_release_competition_dataset/problem2skill',
n_question, n_exercise,
q_gamma
)
###Output
_____no_output_____
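###Markdown
As a quick, purely illustrative sanity check on the objects built above, we can confirm the Q-matrix shape (one row per exercise and one column per skill, plus index 0) and count how many exercise-skill links are actually marked:
###Code
# Shape is (n_exercise + 1, n_question + 1)
print(q_matrix.shape)
# Entries set to 1 are known exercise-skill links; all other entries keep the small q_gamma value
print(int((q_matrix == 1).sum()))
###Output
_____no_output_____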
###Markdown
Training and Persistence
###Code
import logging
logging.getLogger().setLevel(logging.INFO)
from EduKTM import LPKT
lpkt = LPKT(n_at, n_it, n_exercise, n_question, d_a, d_e, d_k, q_matrix, batch_size, dropout)
lpkt.train(train_data, test_data, epoch=2)
lpkt.save("lpkt.params")
###Output
Training: 100%|██████████| 31/31 [16:34<00:00, 32.09s/it]
Testing: 0%| | 0/13 [00:00<?, ?it/s]
###Markdown
Loading and Testing
###Code
lpkt.load("lpkt.params")
_, auc, accuracy = lpkt.eval(test_data)
print("auc: %.6f, accuracy: %.6f" % (auc, accuracy))
###Output
INFO:root:load parameters from lpkt.params
Testing: 100%|██████████| 13/13 [01:20<00:00, 6.23s/it] |
notebooks/cleaning_data_in_r_annotated.ipynb | ###Markdown
Cleaning Data in R Live TrainingWelcome to this hands-on training where you'll identify issues in a dataset and clean it from start to finish using R. It's often said that data scientists spend 80% of their time cleaning and manipulating data and only about 20% of their time analyzing it, so cleaning data is an important skill to master!In this session, you will:- Examine a dataset and identify its problem areas, and what needs to be done to fix them.-Convert between data types to make analysis easier.- Correct inconsistencies in categorical data.- Deal with missing data.- Perform data validation to ensure every value makes sense. **The Dataset**The dataset we'll use is a CSV file named `nyc_airbnb.csv`, which contains data on [*Airbnb*](https://www.airbnb.com/) listings in New York City. It contains the following columns:- `listing_id`: The unique identifier for a listing- `name`: The description used on the listing- `host_id`: Unique identifier for a host- `host_name`: Name of host- `nbhood_full`: Name of borough and neighborhood- `coordinates`: Coordinates of listing _(latitude, longitude)_- `room_type`: Type of room - `price`: Price per night for listing- `nb_reviews`: Number of reviews received - `last_review`: Date of last review- `reviews_per_month`: Average number of reviews per month- `availability_365`: Number of days available per year- `avg_rating`: Average rating (from 0 to 5)- `avg_stays_per_month`: Average number of stays per month- `pct_5_stars`: Percent of reviews that were 5-stars- `listing_added`: Date when listing was added
###Code
### Explain Google Colabs
### Explain Jupyter notebook format - originally for writing and running Python code,
##### but you can use lots of different languages, so today we'll be using R inside of this Jupyter notebook
### Run a cell using shift+enter or command/ctrl+enter
### Add a new cell using +Code button in top left
### Co-labs already has many Tidyverse packages pre-installed, so we only need to install the non-tidyverse packages we'll be using.
# Install non-tidyverse packages
install.packages("visdat")
# Load packages
library(readr)
library(dplyr)
library(stringr)
library(visdat)
library(tidyr)
library(ggplot2)
library(forcats)
# Load dataset
airbnb <- read_csv("https://raw.githubusercontent.com/datacamp/cleaning-data-in-r-live-training/master/assets/nyc_airbnb.csv")
# Examine the first few rows
head(airbnb)
###Output
_____no_output_____
###Markdown
**Diagnosing data cleaning problems**We'll need to get a good look at the data frame in order to identify any problems that may cause issues during an analysis. There are a variety of functions (both from base R and `dplyr`) that can help us with this:- `head()` to look at the first few rows of the data- `glimpse()` to get a summary of the variables' data types- `summary()` to compute summary statistics of each variable and display the number of missing values- `duplicated()` to find duplicates
###Code
# Print the first few rows of data
head(airbnb)
###Output
_____no_output_____
###Markdown
- **Observation 1:** The `coordinates` column contains multiple pieces of information: both latitude and longitude.- **Observation 2:** The `price` column is formatted with an unnecessary `$`.
###Code
# Inspect data types
glimpse(airbnb)
###Output
_____no_output_____
###Markdown
- **Observation 3:** Columns like `coordinates` and `price` are factors instead of numeric values.- **Observation 4:** Columns with dates like `last_review` and `listing_added` are factors instead of the `Date` data type.
###Code
# Examine summary statistics and missing values
summary(airbnb)
###Output
_____no_output_____
###Markdown
- **Observation 5:** There are 2075 missing values in `reviews_per_month`, `avg_rating`, `nb_stays`, and `pct_5_stars`.- **Observation 6:** The max of `avg_rating` is above 5 (out of range value)- **Observation 7:** There are inconsistencies in the categories of `room_type`, i.e. `"Private"`, `"Private room"`, and `"PRIVATE ROOM"`.
###Code
# Count data with duplicated listing_id
airbnb %>%
filter(duplicated(listing_id)) %>%
count()
###Output
_____no_output_____
###Markdown
*A note on the `%>%` operator:*This is an operator commonly used in the Tidyverse to make code more readable. The `%>%` takes the result of whatever is before it and inserts it as the first argument in the subsequent function.We could do this exact same counting operation using the following, but the function calls aren't in the order they're being executed, which makes it difficult to understand what's going on. The `%>%` allows us to write the functions in the order that they're executed.```rcount(filter(airbnb, duplicated(listing_id)))``` - **Observation 8:** There are 20 rows whose `listing_id` already appeared earlier in the dataset (duplicates). **What do we need to do?****Data type issues**- **Task 1:** Split `coordinates` into latitude and longitude and convert `numeric` data type.- **Task 2:** Remove `$`s from `price` column and convert to `numeric`.- **Task 3:** Convert `last_review` and `listing_added` to `Date`.**Text & categorical data issues**- **Task 4:** Split `nbhood_full` into separate neighborhood and borough columns.- **Task 5:** Collapse the categories of `room_type` so that they're consistent.**Data range issues**- **Task 6:** Fix the `avg_rating` column so it doesn't exceed `5`.**Missing data issues**- **Task 7:** Further investigate the missing data and decide how to handle them.**Duplicate data issues**- **Task 8:** Further investigate duplicate data points and decide how to handle them.***But also...***- We need to validate our data using various sanity checks ---Q&A--- **Cleaning the data** **Data type issues**
###Code
# Reminder: what does the data look like?
head(airbnb)
###Output
_____no_output_____
###Markdown
**Task 1:** Split `coordinates` into latitude and longitude and convert `numeric` data type. - `str_remove_all()` removes all instances of a substring from a string.- `str_split()` will split a string into multiple pieces based on a separation string.- `as.data.frame()` converts an object into a data frame. It automatically converts any strings to `factor`s, which is not what we want in this case, so we'll stop this behavior using `stringsAsFactors = FALSE`.- `rename()` takes arguments of the format `new_col_name = old_col_name` and renames the columns as such.
###Code
# Create lat_lon columns
lat_lon <- airbnb$coordinates %>%
# Remove left parentheses
str_remove_all(fixed("(")) %>% # Why do we use fixed()?
# Remove right parentheses
str_remove_all(fixed(")")) %>% ########
# Split latitude and longitude # simplify = TRUE turns it into a matrix instead of a list
str_split(", ", simplify = TRUE) %>% #########
# Convert from matrix to data frame
as.data.frame(stringsAsFactors = FALSE) %>% ########
# Rename columns
rename(latitude = V1, longitude = V2) ######
# Then assign to lat_lon
###Output
_____no_output_____
###Markdown
- `cbind()` stands for column bind, which sticks two data frames together horizontally.(***ROWS MUST BE IN SAME ORDER***)
###Code
# Assign it to dataset
airbnb <- airbnb %>%
# Combine lat_lon with original data frame
cbind(lat_lon) %>% ######
# Convert to numeric
mutate(latitude = as.numeric(latitude),
longitude = as.numeric(longitude)) %>%
# Remove coordinates column
select(-coordinates)
###Output
_____no_output_____
###Markdown
**Task 2:** Remove `$`s from `price` column and convert to `numeric`.
###Code
# Remove $ and convert to numeric
### We can use the same functions we just used!
price_clean <- airbnb$price %>%
str_remove_all(fixed("$")) %>%
as.numeric()
###Output
_____no_output_____
###Markdown
Notice we get a warning here that values are being converted to `NA`, so before we move on, we need to look into this further to ensure that the values are actually missing and we're not losing data by mistake.Let's take a look at the values of `price`.
###Code
# Look at values of price
airbnb %>%
count(price, sort = TRUE)
###Output
_____no_output_____
###Markdown
It looks like we have a non-standard representation of `NA` here, `$NA`, so these are getting coerced to `NA`s. This is the behavior we want, so we can ignore the warning.
###Code
# Add to data frame
airbnb <- airbnb %>%
mutate(price = price_clean)
###Output
_____no_output_____
###Markdown
**Task 3:** Convert `last_review` and `listing_added` to `Date`.Conversion to `Date` is done using `as.Date()`, which takes in a `format` argument. The `format` argument allows us to convert lots of different formats of dates to a `Date` type, like "January 1, 2020" or "01-01-2020". There are special symbols that we use to specify this. Here are a few of them, but you can find all the possible ones by typing `?strptime` into your console.A date like "21 Oct 2020" would be in the format `"%d %b %Y"`.
###Code
# Look up date formatting symbols
?strptime
# Examine first rows of date columns
airbnb %>%
select(last_review, listing_added) %>%
head()
# Convert strings to Dates
airbnb <- airbnb %>%
mutate(last_review = as.Date(last_review, format = "%m/%d/%Y"),
listing_added = as.Date(listing_added, format = "%m/%d/%Y"))
###Output
_____no_output_____
###Markdown
---Q&A--- **Text & categorical data issues** **Task 4:** Split `nbhood_full` into separate `nbhood` and `borough` columns.
###Code
# Split borough and neighborhood
### This is just like when we split the coordinates
borough_nbhood <- airbnb$nbhood_full %>%
# Split column
str_split(", ", simplify = TRUE) %>%
# Convert from matrix to data frame
as.data.frame() %>%
# Rename columns
rename(borough = V1, nbhood = V2)
# Assign to airbnb
airbnb <- airbnb %>%
# Combine borough_nbhood with data
cbind(borough_nbhood) %>%
# Remove nbhood_full
select(-nbhood_full)
###Output
_____no_output_____
###Markdown
**Task 5:** Collapse the categories of `room_type` so that they're consistent.
###Code
# Count categories of room_type
airbnb %>%
count(room_type)
###Output
_____no_output_____
###Markdown
- `stringr::str_to_lower()` converts strings to all lowercase, so `"PRIVATE ROOM"` becomes `"private room"`. This saves us the pain of having to go through the dataset and find each different capitalized variation of `"private room"`.- `forcats::fct_collapse()` will combine multiple categories into one, which is useful when there are a few different values that mean the same thing.
###Code
# Collapse categorical variables
room_type_clean <- airbnb$room_type %>%
# Change all to lowercase
str_to_lower() %>% ###########
# Collapse categories
fct_collapse(private_room = c("private", "private room"),
entire_place = c("entire home/apt", "home"),
shared_room = "shared room")
# Add to data frame
airbnb <- airbnb %>%
mutate(room_type = room_type_clean)
###Output
_____no_output_____
###Markdown
---Q&A--- **Data range issues** **Task 6:** Fix the `avg_rating` column so it doesn't exceed `5`.
###Code
# How many places with avg_rating above 5?
airbnb %>%
filter(avg_rating > 5) %>%
count()
##### Since theree are only a few, we can just print them to see them
# What does the data for these places look like?
airbnb %>%
filter(avg_rating > 5)
# Remove the rows with rating > 5
airbnb <- airbnb %>%
filter(avg_rating <= 5 | is.na(avg_rating)) # NA is not <= 5, and we don't want to get rid of all missing values before having
# time to properly look at them, so include this in filter.
###Output
_____no_output_____
###Markdown
**Missing data issues** **Task 7:** Further investigate the missing data and decide how to handle them.When dealing with missing data, it's important to understand what type of missingness we might have in our data. Oftentimes, missing data can be related to other dynamics in the dataset and requires some domain knowledge to deal with them.The `visdat` package is useful for investigating missing data.
###Code
# See summary again
summary(airbnb)
# Visualize missingness
airbnb %>%
# Focus only on columns with missing values
select(price, last_review, reviews_per_month, avg_rating, avg_stays_per_month) %>%
# Visualize missing data
vis_miss()
###Output
_____no_output_____
###Markdown
It looks like missingness of `last_review`, `reviews_per_month`, `avg_rating`, and `avg_stays_per_month` are related. This suggests that these are places that have never been visited before (therefore have no ratings, reviews, or stays).However, `price` is unrelated to the other columns, so we'll need to take a different approach for that.
###Code
# Sanity check that our hypothesis is correct
##### Are there any rows with at least 1 review, but missing reviews_per_month/avg_stays_per_month?
airbnb %>%
filter(nb_reviews != 0,
is.na(reviews_per_month))
airbnb %>%
filter(nb_reviews != 0,
is.na(avg_stays_per_month))
### There are none, so this means all listings with missing reviews_per_month/avg_stays_per_month have 0 reviews
###Output
_____no_output_____
###Markdown
Now that we have a bit of evidence, we'll assume our hypothesis is true.- We'll set any missing values in `reviews_per_month` or `avg_stays_per_month` to `0`. - Use `tidyr::replace_na()`- We'll leave `last_review` and `avg_rating` as `NA`.- We'll create a `logical` (`TRUE`/`FALSE`) column called `is_visited`, indicating whether or not the listing has been visited before.
###Code
# Replace missing data
airbnb <- airbnb %>%
# Replace missing values in reviews_per_month or avg_stays_per_month with 0
replace_na(list(reviews_per_month = 0, avg_stays_per_month = 0)) %>%
# Create is_visited
mutate(is_visited = !is.na(avg_rating))
###Output
_____no_output_____
###Markdown
**Treating the `price` column**There are lots of ways we could do this- Remove all rows with missing price values- Fill in missing prices with the overall average price- Fill in missing prices based on other columns like `borough` or `room_type`**Let's examine the relationship between `room_type` and `price`.**
###Code
# Create a boxplot showing the distribution of price for each room_type
ggplot(airbnb, aes(x = room_type, y = price)) +
geom_boxplot() + ###### Dataset has a few very high values, let's set a limit on the y-axis to get a better look at the distributions
ylim(0, 1000)
###Output
_____no_output_____
###Markdown
We'll use *median* to summarize the `price` for each `room_type` since the distributions have a number of outliers, and median is more robust to outliers than mean.We'll use `ifelse()`, which takes arguments of the form: `ifelse(condition, value if true, value if false)`.
###Code
# Use a grouped mutate to fill in missing prices with median of their room_type
airbnb %>%
group_by(room_type) %>%
mutate(price_filled = ifelse(is.na(price), median(price, na.rm = TRUE), price)) %>%
# Look at the values we filled in to make sure it looks how we want
filter(is.na(price)) %>%
select(listing_id, description, room_type, price, price_filled)
# Overwrite price column in original data frame
airbnb <- airbnb %>%
group_by(room_type) %>%
mutate(price = ifelse(is.na(price), median(price, na.rm = TRUE), price)) %>%
ungroup()
###Output
_____no_output_____
###Markdown
**Duplicate data issues** **Task 8:** Further investigate duplicate data points and decide how to handle them.
###Code
# Find duplicated listing_ids
duplicate_ids <- airbnb %>%
count(listing_id) %>%
filter(n > 1)
# Why do we do this instead of using duplicated?
# This will give us *all duplicates*, whereas duplicated() will return FALSE for the first occurrence of a row
# Look at duplicated data
airbnb %>%
filter(listing_id %in% duplicate_ids$listing_id) %>% ###### This is tricky to see duplicates since they're not next to each other, so:
arrange(listing_id)
###Output
_____no_output_____
###Markdown
***Full duplicates***: All values match.- To handle these, we can just remove all copies but one using `dplyr::distinct()`***Partial duplicates***: Identifying values (like `listing_id`) match, but one or more of the others don't. Here, we have inconsistent values in `price`, `avg_rating`, and `listing_added`.- We can remove them, pick a random copy to keep, or aggregate any inconsistent values. We'll aggregate using `mean()` for `price` and `avg_rating`, and `max()` for `listing_added`.
###Code
# Remove full duplicates
airbnb <- airbnb %>%
distinct()
# Aggregate partial duplicates using grouped mutate
airbnb <- airbnb %>%
group_by(listing_id) %>%
# Overwrite columns with aggregations
mutate(price = mean(price),
avg_rating = mean(avg_rating),
listing_added = max(listing_added)) %>%
# Remove duplicates based only on listing_id
distinct(listing_id, .keep_all = TRUE) #.keep_all=FALSE will remove all the other columns except listing_id
# Check that no duplicates remain
airbnb %>%
count(listing_id) %>%
filter(n > 1)
###Output
_____no_output_____ |
book/pandas/02-Reading Data.ipynb | ###Markdown
Table of Contents1 Reading Data into Pandas1.1 Read CSV1.2 Read Excel Sheet1.3 Read Multiple Excel Sheets1.4 From URL1.5 Modify Dataset1.6 Read Biological Data(.txt)1.7 Read Biological Data(.tsv)1.8 Read HTML Data1.8.1 About the Author Reading Data into Pandas
###Code
# conventional way to import pandas
import pandas as pd
# Set options for displaying rows and columns
###Output
_____no_output_____
###Markdown
Read CSV
###Code
# read data from csv file
corona = pd.read_csv("../data/covid-19_cleaned_data.csv")
# Examine first few rows
corona.head()
###Output
_____no_output_____
###Markdown
Read Excel Sheet
###Code
# read data from excel file
movies = pd.read_excel("../data/movies.xls")
# examine first few rows
movies.head()
###Output
_____no_output_____
###Markdown
Read Multiple Excel Sheets
###Code
import xlrd
# Import xlsx file and store each sheet in to a df list
xl_file = pd.ExcelFile("../data/data.xls",)
# Dictionary comprehension
dfs = {sheet_name: xl_file.parse(sheet_name) for sheet_name in xl_file.sheet_names}
# Data from each sheet can be accessed via key
keylist = list(dfs.keys())
# Examine the sheet names
keylist[1:10]
# Accessing first sheet
dfs[keylist[0]]
###Output
_____no_output_____
###Markdown
From URL
###Code
# read a dataset of Chipotle orders directly from a URL and store the results in a DataFrame
orders = pd.read_table('http://bit.ly/chiporders')
# examine the first 5 rows
orders.head()
# examine the last 5 rows
orders.tail()
# examine the first `n` number of rows
orders.head(10)
# examine the last `n` number of rows
orders.tail(10)
###Output
_____no_output_____
###Markdown
Modify Dataset
###Code
# read a dataset of movie reviewers (modifying the default parameter values for read_table)
users = pd.read_table('http://bit.ly//movieusers')
# examine the first 5 rows
users.head()
# DataFrame looks ugly. Let's modify the default parameters for read_table
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_table('http://bit.ly//movieusers', sep='|' , header=None, names=user_cols)
# now take a look at modified dataset
users.head()
###Output
_____no_output_____
###Markdown
Read Biological Data(.txt)
###Code
# read text/csv data into pandas
chrom = pd.read_csv("../data/Encode_HMM_data.txt", delimiter="\t", header=None)
# Examine first few rows
chrom.head()
# it's not much better to see. so we have to modify this dataset
cols_name = ['chrom', 'start', 'stop', 'type']
chrom = pd.read_csv("../data/Encode_HMM_data.txt", delimiter="\t", header=None, names=cols_name)
# now examine first few rows
chrom.head()
###Output
_____no_output_____
###Markdown
Read Biological Data(.tsv)
###Code
pokemon = pd.read_csv("../data/pokemon.tsv", sep="\t")
pokemon.head()
###Output
_____no_output_____
###Markdown
Read HTML Data
###Code
# Read HTML data from web
url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'
data = pd.io.html.read_html(url)
# Check type
type(data)
# access data
data[0]
###Output
_____no_output_____ |
A.Y. 2020-2021/5_ScikitLearn_Tutorial.ipynb | ###Markdown
Table of Contents- **Data Representation**- **Supervised Learning** - Classification Example - Regression Example- **Unsupervised Learning** - Dimensionality Reduction: PCA - Clustering: K-means- **Recap: Scikit-learn's estimator interface**- **Evaluation Metrics**- **Validation**- **Cross-Validation**- **Overfitting, Underfitting and Model Selection**- **Feature Selection** - K-Best selection- **Pipelines** From the [sklearn tutorial](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/astronomy/general_concepts.html) "Machine Learning 101: General Concepts" [**scikit-learn**](http://scikit-learn.org) is a Python package designed to give access to well-known **machine learning algorithms within Python** code, through a clean, well-thought-out API. It has been built by hundreds of contributors from around the world, and is used across industry and academia.scikit-learn is built upon Python's [NumPy](http://www.numpy.org/) (Numerical Python) and [SciPy](http://www.scipy.org/) (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python. As such, scikit-learn is not specifically designed for extremely large datasets, though there is some work in this area.
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data RepresentationMost machine learning algorithms implemented in scikit-learn expect a **two-dimensional array or matrix** `X`, usually represented as a NumPy ndarray. The expected shape of `X` is `(n_samples, n_features)`.* `n_samples`: The number of samples, where each sample is an item to process (e.g., classify). A sample can be a document, a picture, a sound, a video, a row in database or CSV file, or whatever you can describe with a fixed set of quantitative traits.* `n_features`: The number of features or distinct traits that can be used to describe each item in a quantitative manner. Features are generally real-valued, but may be boolean or discrete-valued in some cases.The number of features must be fixed in advance. However it can be very high dimensional (e.g. millions of features) with most of them being zeros for a given sample. In this case we may use `scipy.sparse` matrices instead of NumPy arrays so as to make the data fit in memory.The *supervised* machine learning algorithms implemented in scikit-learn also expect a **one-dimensional array** `y` with shape `(n_samples,)`. This array associated a target class to every sample in the input `X`. As an example, we will explore the **Iris dataset**. The machine learning community often uses a simple flowers database where each row in the database (or CSV file) is a set of measurements of an individual iris flower. Each sample in this dataset is described by 4 features and can belong to one of three target classes:**Features in the Iris dataset:** * sepal length in cm* sepal width in cm* petal length in cm* petal width in cm**Target classes to predict:** * Iris Setosa* Iris Versicolour* Iris VirginicaScikit-Learn embeds a copy of the Iris CSV file along with a helper function to load it into NumPy arrays:
###Code
from sklearn.datasets import load_iris
iris = load_iris()
type(iris)
dir(iris)
###Output
_____no_output_____
###Markdown
The features of each sample flower are stored in the `data` attribute of the dataset:
###Code
n_samples, n_features = iris.data.shape
print(n_samples)
print(n_features)
print(iris.data[0])
###Output
150
4
[5.1 3.5 1.4 0.2]
###Markdown
The information about the class of each sample is stored in the `target` attribute of the dataset:
###Code
print(iris.target)
###Output
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
###Markdown
This data is four dimensional, but we can visualize two of the dimensions at a time using a simple scatter-plot:
###Code
x_index = 0
y_index = 1
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target, cmap=plt.cm.get_cmap('Paired', 3))
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.show()
###Output
_____no_output_____
###Markdown
The previous example deals with features that are readily available in a structured dataset with rows and columns of numerical or categorical values.However, most of the produced data is not readily available in a structured representation such as SQL, CSV, XML, JSON or RDF.How to turn unstructed data (e.g. text documents) items into arrays of numerical features?A solution for **text documents** consists in counting the frequency of each word or pair of consecutive words in each document. This approach is called Bag of Words.Note: we include other file formats such as HTML and PDF in this category: an ad-hoc preprocessing step is required to extract the plain text in UTF-8 encoding for instance. Supervised LearningA supervised learning algorithm makes the distinction between the raw observed data `X` with shape `(n_samples, n_features)` and some label given to the model while training by some teacher. In scikit-learn this array is often noted `y` and has generally the shape `(n_samples,)`.After training, the fitted model does no longer expect the `y` as an input: it will try to predict the most likely labels `y_new` for a new set of samples `X_new`.Depending on the nature of the target `y`, supervised learning can be given different names:* If `y` has values in a fixed set of **categorical outcomes** (represented by integers) the task to predict `y` is called **classification**.* If `y` has **floating point values** (e.g. to represent a price, a temperature, a size...), the task to predict `y` is called **regression**. Classification ExampleK nearest neighbors (kNN) is one of the simplest learning strategies: given a new, unknown observation, look up in your reference database which ones have the closest features and assign the predominant class.Let's try it out on our iris classification problem:
###Code
from sklearn import neighbors, datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
# create the model
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
# fit the model
knn.fit(X, y)
# What kind of iris has 3cm x 5cm sepal and 4cm x 2cm petal?
# call the "predict" method:
result = knn.predict([[3, 5, 4, 2],])
print(iris.target_names[result])
###Output
['versicolor']
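###Markdown
The fitted classifier exposes a few more methods from the estimator interface; as a quick illustration (reusing the `knn` object and the same hypothetical flower), we can ask for class probabilities and for the mean accuracy on the training data:
###Code
# Estimated class membership probabilities for the same unseen sample
print(knn.predict_proba([[3, 5, 4, 2]]))
# Mean accuracy on the data the model was fitted on (an optimistic estimate; see the Validation section below)
print(knn.score(X, y))
###Output
_____no_output_____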
###Markdown
Regression ExampleOne of the simplest regression problems is fitting a line to data.
###Code
# Create some simple data
import numpy as np
np.random.seed(0)
X = np.random.random(size=(20, 1))
y = 3 * X.squeeze() + 2 + np.random.randn(20)
plt.plot(X.squeeze(), y, 'o')
plt.show()
print(X.shape)
###Output
(20, 1)
###Markdown
Note that model.fit() typically expects a 2D arrayYou must reshape your data- using array.reshape(-1, 1) if your data has a single feature- using array.reshape(1, -1) if it contains a single sample
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)
# Plot the data and the model prediction
X_fit = np.linspace(0, 1, 100)[:, np.newaxis]
y_fit = model.predict(X_fit)
plt.plot(X, y, 'o')
plt.plot(X_fit, y_fit)
plt.show()
###Output
_____no_output_____
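###Markdown
The fitted `LinearRegression` object stores the slope and intercept of the line it found; a quick check on the `model` fitted above:
###Code
# One coefficient per feature (here a single slope) plus the intercept
print(model.coef_)
print(model.intercept_)
###Output
_____no_output_____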
###Markdown
Scikit-learn also has some more sophisticated models, which can respond to finer features in the data:
###Code
# Fit a DecisionTree
from sklearn.tree import DecisionTreeRegressor
model1 = DecisionTreeRegressor(max_depth = 2)
model1.fit(X, y)
# Plot the data and the model prediction
X_fit = np.linspace(0, 1, 100)[:, np.newaxis]
y1_fit = model1.predict(X_fit)
plt.plot(X.squeeze(), y, 'o')
plt.plot(X_fit.squeeze(), y1_fit)
plt.show()
###Output
_____no_output_____
###Markdown
Unsupervised LearningUnsupervised learning addresses a different sort of problem. Here the data has no labels, and we are interested in finding similarities between the objects in question. An unsupervised learning algorithm only uses a single set of observations `X` with shape `(n_samples, n_features)` and does not use any kind of labels.Unsupervised learning comprises tasks such as *dimensionality reduction* and *clustering*. For example, in the Iris data discussed above, we can use unsupervised methods to determine combinations of the measurements which best display the structure of the data. Sometimes the two may even be combined: e.g. Unsupervised learning can be used to find useful features in heterogeneous data, and then these features can be used within a supervised framework. Dimensionality Reduction: PCAPrinciple Component Analysis (PCA) is a dimensionality reduction technique that can find the combinations of variables that explain the most variance.Consider the Iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it.
###Code
from sklearn.decomposition import PCA
PCA?
X, y = iris.data, iris.target
pca = PCA(n_components=0.95)
pca.fit(X)
X_reduced = pca.transform(X)
print("Reduced dataset shape:", X_reduced.shape)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap='Paired')
plt.show()
###Output
Reduced dataset shape: (150, 2)
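###Markdown
To see how much of the total variance the two retained components actually explain, we can inspect the fitted `pca` object (a quick check, nothing more):
###Code
# Fraction of the variance explained by each retained component, and their sum
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
###Output
_____no_output_____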
###Markdown
Clustering: K-meansClustering groups together observations that are homogeneous with respect to a given criterion, finding ''clusters'' in the data.Note that these clusters will uncover relevant hidden structure of the data only if the criterion used highlights it.
###Code
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred, cmap='Paired')
plt.show()
###Output
_____no_output_____
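###Markdown
Because the true species labels happen to be available for this dataset, we can quantify how well the clusters recover them. A minimal sketch using the adjusted Rand index (which ignores the arbitrary numbering of the clusters):
###Code
from sklearn.metrics import adjusted_rand_score

# 1.0 means the clustering matches the true labels perfectly; values near 0 mean random agreement
print(adjusted_rand_score(y, y_pred))
###Output
_____no_output_____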
###Markdown
Exercise- generate N clusters, each from a distinct random gaussian distribution (remember np.random.randn?), in a bi-dimensional attribute space,- apply DBSCAN algorithm,- apply the k-dist heuristic to find a proper parameter configuration.
###Code
# Possible solution
from sklearn.neighbors import NearestNeighbors
from matplotlib import pyplot as plt
from sklearn.datasets import make_biclusters
from sklearn.cluster import DBSCAN
data,lab,_ = make_biclusters((200,2), 2, noise=0.1, minval=0, maxval=1,random_state = 42)
minpts = 4
nbrs = NearestNeighbors(n_neighbors=minpts).fit(data)
distances, indices = nbrs.kneighbors(data)
k_dist = [x[-1] for x in distances]
dbclustering = DBSCAN(0.075,min_samples = 4)
labels = dbclustering.fit_predict(data)
f,ax = plt.subplots(1,3,figsize = (15,5))
ax[0].set_title('original data')
ax[0].scatter(data[:,0],data[:,1],c = lab[0])
ax[1].set_title('k-dist plot for k = minpts = 4')
ax[1].plot(sorted(k_dist))
ax[1].set_xlabel('object index after sorting by k-distance')
ax[1].set_ylabel('k-distance')
ax[2].set_title('cluster labels')
ax[2].scatter(data[:,0],data[:,1],c = labels)
###Output
_____no_output_____
###Markdown
Recap: Scikit-learn's estimator interfaceScikit-learn strives to have a uniform interface across all methods, and we have seen examples of these above. Every algorithm is exposed in scikit-learn via an **estimator** object. Given a scikit-learn estimator object named model, the following methods are available:* Available in **all estimators**: - `model.fit()`: fit training data. - For supervised learning applications, this accepts two arguments: the data `X` and the labels `y` (e.g. `model.fit(X, y)`). - For unsupervised learning applications, this accepts only a single argument, the data `X` (e.g. `model.fit(X)`).* Available in **supervised estimators**: - `model.predict()`: given a trained model, predict the label of a new set of data. This method accepts one argument, the new data `X_new` (e.g. `model.predict(X_new)`), and returns the learned label for each object in the array. - `model.predict_proba()`: For classification problems, some estimators also provide this method, which returns the probability that a new observation has each categorical label. In this case, the label with the highest probability is returned by `model.predict()`. - `model.score()`: for classification or regression problems, most estimators implement a score method. Scores are between 0 and 1, with a larger score indicating a better fit.* Available in **unsupervised estimators**: - `model.predict()`: predict labels in clustering algorithms. - `model.transform()`: given an unsupervised model, transform new data into the new basis. This also accepts one argument X_new, and returns the new representation of the data based on the unsupervised model. - `model.fit_transform()`: some estimators implement this method, which more efficiently performs a fit and a transform on the same input data. Evaluation MetricsMachine learning models are often used to predict the outcomes of a classification problem. Predictive models rarely predict everything perfectly, so there are many performance metrics that can be used to analyze our models.When you run a prediction on your data to distinguish among two classes (*positive* and *negative* classes, for simplicity), your results can be broken down into 4 parts:* **True Positives**: data in class *positive* that the model predicts will be in class *positive*;* **True Negatives**: data in class *negative* that the model predicts will be in class *negative*;* **False Positives**: data in class *negative* that the model predicts will be in class *positive*;* **False Negatives**: data in class *positive* that the model predicts will be in class *negative*.The most common performance metrics in this binary classification scenario are the following:* **accuracy**: the fraction of observations (both positive and negative) predicted correctly:$$ Accuracy = \frac{(TP+TN)}{(TP+FP+TN+FN)} $$* **recall**: the fraction of positive observations that are predicted correctly:$$ Recall = \frac{TP}{(TP+FN)} $$* **precision**: the fraction of of predicted positive observations that are actually positive:$$ Precision = \frac{TP}{(TP+FP)} $$* **f1-score**: a composite measure that combines both precision and recall:$$ F_1 = \frac{2 \cdot P \cdot R}{(P+R)}$$The **confusion matrix** is useful for quickly calculating precision and recall given the predicted labels from a model. A confusion matrix for binary classification shows the four different outcomes: true positive, false positive, true negative, and false negative. The actual values form the columns, and the predicted values (labels) form the rows. 
The intersection of the rows and columns shows one of the four outcomes.  Validation Consider the digits example (8x8 images of handwritten digits). How might we check how well our model fits the data? The WRONG way
###Code
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
print(digits.data.shape)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred = knn.predict(X)
print(f"{np.sum(y == y_pred)} / {len(y)} correct")
###Output
(1797, 64)
1797 / 1797 correct
###Markdown
It seems we have a perfect classifier! **What's wrong with this?**Learning the parameters of a prediction function and testing it on the same data is a **methodological mistake**: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data.A better way to test a model is to **use a hold-out set which doesn't participate in the training**, using scikit-learn's train/test split utility.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Now we train on the training data, and validate on the test data:
###Code
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(f"{np.sum(y_test == y_pred)} / { len(y_test)} correct")
###Output
444 / 450 correct
###Markdown
This gives us a more reliable estimate of how our model is doing.The metric we're using here, comparing the number of matches to the total number of samples, is known as the **accuracy score**, and can be computed using the following routine:
###Code
from sklearn.metrics import accuracy_score, classification_report
print(accuracy_score(y_test, y_pred))
###Output
0.9866666666666667
###Markdown
Let's quantify the classifier's prediction performance by generating a scikit-learn **classification report**:
###Code
print(classification_report(y_test,y_pred))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 47
1 1.00 1.00 1.00 41
2 1.00 0.98 0.99 45
3 0.93 1.00 0.97 57
4 1.00 1.00 1.00 36
5 0.97 0.97 0.97 38
6 1.00 1.00 1.00 47
7 1.00 1.00 1.00 42
8 1.00 1.00 1.00 47
9 0.98 0.92 0.95 50
accuracy 0.99 450
macro avg 0.99 0.99 0.99 450
weighted avg 0.99 0.99 0.99 450
###Markdown
Confusion Matrix: from the [documentation](https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.htmlsphx-glr-auto-examples-model-selection-plot-confusion-matrix-py)
###Code
from sklearn.metrics import plot_confusion_matrix
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
fig,ax = plt.subplots(1,1,figsize = (8,8))
disp = plot_confusion_matrix(knn, X_test, y_test,
display_labels=range(10),
cmap=plt.cm.Blues,
normalize=normalize,
ax = ax)
disp.ax_.set_title(title)
print(title)
print(disp.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[47 0 0 0 0 0 0 0 0 0]
[ 0 41 0 0 0 0 0 0 0 0]
[ 0 0 44 1 0 0 0 0 0 0]
[ 0 0 0 57 0 0 0 0 0 0]
[ 0 0 0 0 36 0 0 0 0 0]
[ 0 0 0 0 0 37 0 0 0 1]
[ 0 0 0 0 0 0 47 0 0 0]
[ 0 0 0 0 0 0 0 42 0 0]
[ 0 0 0 0 0 0 0 0 47 0]
[ 0 0 0 3 0 1 0 0 0 46]]
Normalized confusion matrix
[[1. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0. ]
[0. 0. 0.98 0.02 0. 0. 0. 0. 0. 0. ]
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. ]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.97 0. 0. 0. 0.03]
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. ]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. ]
[0. 0. 0. 0.06 0. 0.02 0. 0. 0. 0.92]]
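###Markdown
To tie the confusion matrix back to the precision, recall and F1 formulas given earlier, here is a minimal sketch on a small set of made-up binary labels (the values below are arbitrary and only for illustration; note that scikit-learn's `confusion_matrix` puts the true classes on the rows and the predicted classes on the columns):
###Code
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true_bin = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical ground truth
y_pred_bin = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # hypothetical predictions

print(confusion_matrix(y_true_bin, y_pred_bin))
print(precision_score(y_true_bin, y_pred_bin))  # TP / (TP + FP)
print(recall_score(y_true_bin, y_pred_bin))     # TP / (TP + FN)
print(f1_score(y_true_bin, y_pred_bin))         # harmonic mean of precision and recall
###Output
_____no_output_____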
###Markdown
Cross Validation One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use **$K$-fold cross-validation**, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set:
###Code
from sklearn.model_selection import cross_val_score
cv = cross_val_score(KNeighborsClassifier(3), X, y, cv=10)
print(cv.mean())
###Output
0.9766325263811299
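###Markdown
The `cv=10` argument above lets `cross_val_score` build the folds internally. For illustration, the same procedure can be written out explicitly with `KFold` (a minimal sketch using a plain shuffled `KFold`; for classifiers, `cross_val_score` actually uses a stratified variant by default):
###Code
from sklearn.model_selection import KFold

kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X):
    fold_model = KNeighborsClassifier(n_neighbors=3)
    fold_model.fit(X[train_idx], y[train_idx])
    scores.append(fold_model.score(X[test_idx], y[test_idx]))
print(np.mean(scores))
###Output
_____no_output_____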
###Markdown
In standard $K$-fold cross-validation, we partition the data into $K$ subsets, called **folds**. Then, we iteratively train the algorithm on $k-1$ folds while using the remaining fold as the test set (called the “**holdout fold**”): Overfitting, Underfitting and Model SelectionThe issues associated with validation and cross-validation are some of the most important aspects of the practice of machine learning. Selecting the optimal model for your data is vital, and is a piece of the problem that is not often appreciated by machine learning practitioners.Of core importance is the following question: **if our estimator is underperforming, how should we move forward?**How do we address this question?* Use simpler or more complicated model?* Add more features to each observed data point?* Add more training samples?The answer is often counter-intuitive. In particular, sometimes using a more complicated model will give worse results. Also, sometimes adding training data will not improve your results. **Generalization** refers to how well the concepts learned by a machine learning model apply to *specific examples not seen by the model when it was learning*.The goal of a good machine learning model is to *generalize well from the training data to any data* from the problem domain. This allows us to make predictions in the future on data the model has never seen.There is a terminology used in machine learning when we talk about how well a machine learning model learns and generalizes to new data, namely *underfitting* and *overfitting*.Underfitting and overfitting are the two biggest causes for poor performance of machine learning algorithms.**Underfitting** refers to a model that can neither model the training data nor generalize to new data.An underfit machine learning model is not a suitable model and will be obvious as it will have poor performance on the training data. Underfitting is often not discussed as it is easy to detect given a good performance metric. The remedy is to move on and try alternate machine learning algorithms.**Overfitting** refers to a model that models the training data too well.Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the models ability to generalize. To illustrate overfitting, we'll work with a simple linear regression problem.
###Code
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
np.random.seed(1)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error) # ravel returns a contiguous flattened array.
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y)
plt.show()
###Output
_____no_output_____
###Markdown
Now we want to perform a regression on this data, using the scikit-learn built-in linear regression function to compute a fit:
###Code
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
X_test = np.linspace(-0.1, 1.1, 500).reshape(500,1)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title(f"mean squared error on training data: {mean_squared_error(model.predict(X), y):.3f}")
plt.show()
###Output
_____no_output_____
###Markdown
We have fit a straight line to the data, but clearly this model is not a good choice: *this model under-fits the data.*We try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Let's make a convenience routine to do this:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs): # **kwargs is used to pass the variable keyword arguments to the function
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
###Output
_____no_output_____
###Markdown
Let see what happens with a 2-degrees and a 30-degrees polynomial regression:
###Code
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title(f"mean squared error on training data: {mean_squared_error(model.predict(X), y):.3f}")
plt.show()
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.ylim([0,12])
plt.title(f"mean squared error on training data: {mean_squared_error(model.predict(X), y):.3f}")
plt.show()
###Output
_____no_output_____
###Markdown
In the first case (2 degrees) we have a much better fit, while in the second case (30 degrees), this model is not a good choice again.We can use cross-validation to get a better handle on how the model fit is working. Let's do this here.For a visual assessment of the performance, we will use the `validation_curve` utility. To make things more clear, we'll use a slightly larger dataset.
###Code
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
plt.show()
from sklearn.model_selection import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 30)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
param_name = 'polynomialfeatures__degree',
param_range = degree,
cv=7,
scoring=rms_error)
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.ylim([0,5])
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Notice the trend here, which is common for this type of plot.* For a small model complexity, the training error and validation error are very similar. This indicates that the model is *under-fitting the data*: it doesn't have enough complexity to represent the data.* As the model complexity grows, the training and validation scores diverge. This indicates that the model is *over-fitting the data*: it has so much flexibility, that it fits the noise rather than the underlying trend. * Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms. Feature SelectionThe dataset we want to feed into our machine learning model could include a vast amount of features. Among them, there can be *redundant* as well as *irrelevant* features. **Redundant features** convey the same information contained in other features, while **irrelevant features** regard information useless for the learning process.While a domain expert could recognize such features, the process will be long or almost impossible to be carried out by hand. The **feature selection** methods aim at reducing automatically the number of features in a dataset without negatively impacting the predictive power of the learned model.Three benefits of performing feature selection before modeling your data are:* **Reduces overfitting**: less redundant data means less opportunity to make decisions based on noise.* **Improves accuracy**: less misleading data means modeling accuracy improves.* **Reduces training time**: less data means that algorithms train faster.The classes in the `sklearn.feature_selection` module can be used for feature selection/dimensionality reduction on sample sets.The simplest baseline approach to feature selection is `VarianceThreshold`. It removes all features whose variance doesn’t meet some threshold. By default, it removes all zero-variance features, i.e. features that have the same value in all samples.
###Code
from sklearn.feature_selection import VarianceThreshold
X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]])
print(X)
feat_selector = VarianceThreshold(threshold=0.16)
X_sel = feat_selector.fit_transform(X)
print(X_sel)
###Output
[[0 0 1]
[0 1 0]
[1 0 0]
[0 1 1]
[0 1 0]
[0 1 1]]
[[0 1]
[1 0]
[0 0]
[1 1]
[1 0]
[1 1]]
###Markdown
K-Best selection`SelectKBest` removes all but the $k$ highest scoring features according to a univariate scoring function; here we rank features by the $\chi^2$ statistic
###Code
from sklearn.feature_selection import SelectKBest, chi2
feat_selector = SelectKBest(chi2,k=2)
print(iris.data.shape)
X_sel = feat_selector.fit_transform(iris.data, iris.target)
print(X_sel.shape)
###Output
(150, 4)
(150, 2)
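###Markdown
The same selector can be driven by a different scoring function. For example, a minimal sketch using mutual information on the same iris data:
###Code
from sklearn.feature_selection import mutual_info_classif

mi_selector = SelectKBest(mutual_info_classif, k=2)
X_sel_mi = mi_selector.fit_transform(iris.data, iris.target)
print(mi_selector.scores_)  # estimated mutual information between each feature and the target
print(X_sel_mi.shape)
###Output
_____no_output_____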
###Markdown
The `SelectKBest` object takes as input a scoring function that returns univariate scores. As scoring function, you may use:- For regression: `f_regression`, `mutual_info_regression`- For classification: `chi2`, `f_classif`, `mutual_info_classif`The methods based on **F-test** estimate the degree of linear dependency between two random variables. On the other hand, **mutual information** methods can capture any kind of statistical dependency, but being nonparametric, they require more samples for accurate estimation. Pipelines- from the [User guide](https://scikit-learn.org/stable/modules/compose.htmlpipeline) Pipeline can be used to **chain multiple estimators into one**. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:- Convenience and encapsulation: you only have to call **fit and predict once on your data** to fit a whole sequence of estimators.- Joint parameter selection: you can **grid search over parameters of all estimators** in the pipeline at once.- Safety: pipelines help **avoid leaking statistics from your test data into the trained model in cross-validation**, by ensuring that the same samples are used to train the transformers and predictors.
###Code
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
estimators = [('reduce_dim', PCA(n_components=0.95)), ('clf', SVC())]
pipe = Pipeline(estimators)
pipe
###Output
_____no_output_____
###Markdown
If properly built, a pipeline can be passed to the `cross_val_score` function, since pipelines are special instances of estimators:
###Code
X, y = iris.data, iris.target
cross_val_score(pipe,X,y,cv = 6)
###Output
_____no_output_____
###Markdown
How to access a specific step of the pipeline?
###Code
pipe.steps[0][1]
###Output
_____no_output_____ |
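###Markdown
The "joint parameter selection" point above can be made concrete: parameters of any step are addressed as `<step name>__<parameter>`, so a grid search can tune the PCA and the SVC together. A minimal sketch (the candidate values below are arbitrary):
###Code
from sklearn.model_selection import GridSearchCV

param_grid = {
    'reduce_dim__n_components': [2, 3],
    'clf__C': [0.1, 1.0, 10.0],
}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_)
print(grid.best_score_)
###Output
_____no_output_____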
tutorial/10-briefTourOfTheStandardLibrary.ipynb | ###Markdown
Brief Tour of the Standard Library 10.1 Operating System InterfaceThe os module provides many functions for interacting with the operating system.
###Code
import os
print(os.getcwd()) # Return the current working directory
os.chdir('.') # Change the current working directory
os.system('echo tk') # Run a shell command through the system
###Output
/Users/tiankai/ownGIT/python_chinese_documentation/tutorial
###Markdown
Please do not use from os import \*, as this would import the os.open() function, which conflicts with the built-in open() function for opening files.
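###Markdown
The clash matters because the two functions have different signatures: `os.open()` takes integer flags and returns a low-level file descriptor, while the built-in `open()` takes a mode string and returns a file object. A minimal sketch (the file name `example.txt` is arbitrary):
###Code
import os

# Low-level call: integer flags, returns a file descriptor
fd = os.open('example.txt', os.O_WRONLY | os.O_CREAT)
os.write(fd, b'hello\n')
os.close(fd)

# Built-in open(): mode string, returns a file object
with open('example.txt') as f:
    print(f.read())
###Output
_____no_output_____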
###Code
# Utility functions
dir(os)
help(os)
###Output
Help on module os:
NAME
os - OS routines for NT or Posix depending on what system we're on.
DESCRIPTION
This exports:
- all functions from posix or nt, e.g. unlink, stat, etc.
- os.path is either posixpath or ntpath
- os.name is either 'posix' or 'nt'
- os.curdir is a string representing the current directory (always '.')
- os.pardir is a string representing the parent directory (always '..')
- os.sep is the (or a most common) pathname separator ('/' or '\\')
- os.extsep is the extension separator (always '.')
- os.altsep is the alternate pathname separator (None or '/')
- os.pathsep is the component separator used in $PATH etc
- os.linesep is the line separator in text files ('\r' or '\n' or '\r\n')
- os.defpath is the default search path for executables
- os.devnull is the file path of the null device ('/dev/null', etc.)
Programs that import and use 'os' stand a better chance of being
portable between different platforms. Of course, they must then
only use functions that are defined by all platforms (e.g., unlink
and opendir), and leave all pathname manipulation to os.path
(e.g., split and join).
CLASSES
builtins.Exception(builtins.BaseException)
builtins.OSError
builtins.object
posix.DirEntry
builtins.tuple(builtins.object)
stat_result
statvfs_result
terminal_size
posix.times_result
posix.uname_result
class DirEntry(builtins.object)
| Methods defined here:
|
| __fspath__(self, /)
| Returns the path for the entry.
|
| __repr__(self, /)
| Return repr(self).
|
| inode(self, /)
| Return inode of the entry; cached per entry.
|
| is_dir(self, /, *, follow_symlinks=True)
| Return True if the entry is a directory; cached per entry.
|
| is_file(self, /, *, follow_symlinks=True)
| Return True if the entry is a file; cached per entry.
|
| is_symlink(self, /)
| Return True if the entry is a symbolic link; cached per entry.
|
| stat(self, /, *, follow_symlinks=True)
| Return stat_result object for the entry; cached per entry.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| name
| the entry's base filename, relative to scandir() "path" argument
|
| path
| the entry's full path name; equivalent to os.path.join(scandir_path, entry.name)
error = class OSError(Exception)
| Base class for I/O related errors.
|
| Method resolution order:
| OSError
| Exception
| BaseException
| object
|
| Methods defined here:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __reduce__(...)
| Helper for pickle.
|
| __str__(self, /)
| Return str(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| characters_written
|
| errno
| POSIX exception code
|
| filename
| exception filename
|
| filename2
| second exception filename
|
| strerror
| exception strerror
|
| ----------------------------------------------------------------------
| Methods inherited from BaseException:
|
| __delattr__(self, name, /)
| Implement delattr(self, name).
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __repr__(self, /)
| Return repr(self).
|
| __setattr__(self, name, value, /)
| Implement setattr(self, name, value).
|
| __setstate__(...)
|
| with_traceback(...)
| Exception.with_traceback(tb) --
| set self.__traceback__ to tb and return self.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from BaseException:
|
| __cause__
| exception cause
|
| __context__
| exception context
|
| __dict__
|
| __suppress_context__
|
| __traceback__
|
| args
class stat_result(builtins.tuple)
| stat_result(iterable=(), /)
|
| stat_result: Result from stat, fstat, or lstat.
|
| This object may be accessed either as a tuple of
| (mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime)
| or via the attributes st_mode, st_ino, st_dev, st_nlink, st_uid, and so on.
|
| Posix/windows: If your platform supports st_blksize, st_blocks, st_rdev,
| or st_flags, they are available as attributes only.
|
| See os.stat for more information.
|
| Method resolution order:
| stat_result
| builtins.tuple
| builtins.object
|
| Methods defined here:
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| st_atime
| time of last access
|
| st_atime_ns
| time of last access in nanoseconds
|
| st_birthtime
| time of creation
|
| st_blksize
| blocksize for filesystem I/O
|
| st_blocks
| number of blocks allocated
|
| st_ctime
| time of last change
|
| st_ctime_ns
| time of last change in nanoseconds
|
| st_dev
| device
|
| st_flags
| user defined flags for file
|
| st_gen
| generation number
|
| st_gid
| group ID of owner
|
| st_ino
| inode
|
| st_mode
| protection bits
|
| st_mtime
| time of last modification
|
| st_mtime_ns
| time of last modification in nanoseconds
|
| st_nlink
| number of hard links
|
| st_rdev
| device type (if inode device)
|
| st_size
| total size, in bytes
|
| st_uid
| user ID of owner
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| n_fields = 22
|
| n_sequence_fields = 10
|
| n_unnamed_fields = 3
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.tuple:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(self, /)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __rmul__(self, value, /)
| Return value*self.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
class statvfs_result(builtins.tuple)
| statvfs_result(iterable=(), /)
|
| statvfs_result: Result from statvfs or fstatvfs.
|
| This object may be accessed either as a tuple of
| (bsize, frsize, blocks, bfree, bavail, files, ffree, favail, flag, namemax),
| or via the attributes f_bsize, f_frsize, f_blocks, f_bfree, and so on.
|
| See os.statvfs for more information.
|
| Method resolution order:
| statvfs_result
| builtins.tuple
| builtins.object
|
| Methods defined here:
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| f_bavail
|
| f_bfree
|
| f_blocks
|
| f_bsize
|
| f_favail
|
| f_ffree
|
| f_files
|
| f_flag
|
| f_frsize
|
| f_fsid
|
| f_namemax
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| n_fields = 11
|
| n_sequence_fields = 10
|
| n_unnamed_fields = 0
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.tuple:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(self, /)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __rmul__(self, value, /)
| Return value*self.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
class terminal_size(builtins.tuple)
| terminal_size(iterable=(), /)
|
| A tuple of (columns, lines) for holding terminal window size
|
| Method resolution order:
| terminal_size
| builtins.tuple
| builtins.object
|
| Methods defined here:
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| columns
| width of the terminal window in characters
|
| lines
| height of the terminal window in characters
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| n_fields = 2
|
| n_sequence_fields = 2
|
| n_unnamed_fields = 0
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.tuple:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(self, /)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __rmul__(self, value, /)
| Return value*self.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
class times_result(builtins.tuple)
| times_result(iterable=(), /)
|
| times_result: Result from os.times().
|
| This object may be accessed either as a tuple of
| (user, system, children_user, children_system, elapsed),
| or via the attributes user, system, children_user, children_system,
| and elapsed.
|
| See os.times for more information.
|
| Method resolution order:
| times_result
| builtins.tuple
| builtins.object
|
| Methods defined here:
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| children_system
| system time of children
|
| children_user
| user time of children
|
| elapsed
| elapsed time since an arbitrary point in the past
|
| system
| system time
|
| user
| user time
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| n_fields = 5
|
| n_sequence_fields = 5
|
| n_unnamed_fields = 0
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.tuple:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(self, /)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __rmul__(self, value, /)
| Return value*self.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
class uname_result(builtins.tuple)
| uname_result(iterable=(), /)
|
| uname_result: Result from os.uname().
|
| This object may be accessed either as a tuple of
| (sysname, nodename, release, version, machine),
| or via the attributes sysname, nodename, release, version, and machine.
|
| See os.uname for more information.
|
| Method resolution order:
| uname_result
| builtins.tuple
| builtins.object
|
| Methods defined here:
|
| __reduce__(...)
| Helper for pickle.
|
| __repr__(self, /)
| Return repr(self).
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| machine
| hardware identifier
|
| nodename
| name of machine on network (implementation-defined)
|
| release
| operating system release
|
| sysname
| operating system name
|
| version
| operating system version
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| n_fields = 5
|
| n_sequence_fields = 5
|
| n_unnamed_fields = 0
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.tuple:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(self, /)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __rmul__(self, value, /)
| Return value*self.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
FUNCTIONS
WCOREDUMP(status, /)
Return True if the process returning status was dumped to a core file.
WEXITSTATUS(status)
Return the process return code from status.
WIFCONTINUED(status)
Return True if a particular process was continued from a job control stop.
Return True if the process returning status was continued from a
job control stop.
WIFEXITED(status)
Return True if the process returning status exited via the exit() system call.
WIFSIGNALED(status)
Return True if the process returning status was terminated by a signal.
WIFSTOPPED(status)
Return True if the process returning status was stopped.
WSTOPSIG(status)
Return the signal that stopped the process that provided the status value.
WTERMSIG(status)
Return the signal that terminated the process that provided the status value.
_exit(status)
Exit to the system with specified status, without normal exit processing.
abort()
Abort the interpreter immediately.
This function 'dumps core' or otherwise fails in the hardest way possible
on the hosting operating system. This function never returns.
access(path, mode, *, dir_fd=None, effective_ids=False, follow_symlinks=True)
Use the real uid/gid to test for access to a path.
path
Path to be tested; can be string, bytes, or a path-like object.
mode
Operating-system mode bitfield. Can be F_OK to test existence,
or the inclusive-OR of R_OK, W_OK, and X_OK.
dir_fd
If not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that
directory.
effective_ids
If True, access will use the effective uid/gid instead of
the real uid/gid.
follow_symlinks
If False, and the last element of the path is a symbolic link,
access will examine the symbolic link itself instead of the file
the link points to.
dir_fd, effective_ids, and follow_symlinks may not be implemented
on your platform. If they are unavailable, using them will raise a
NotImplementedError.
Note that most operations will use the effective uid/gid, therefore this
routine can be used in a suid/sgid environment to test if the invoking user
has the specified access to the path.
chdir(path)
Change the current working directory to the specified path.
path may always be specified as a string.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
chflags(path, flags, follow_symlinks=True)
Set file flags.
If follow_symlinks is False, and the last element of the path is a symbolic
link, chflags will change flags on the symbolic link itself instead of the
file the link points to.
follow_symlinks may not be implemented on your platform. If it is
unavailable, using it will raise a NotImplementedError.
chmod(path, mode, *, dir_fd=None, follow_symlinks=True)
Change the access permissions of a file.
path
Path to be modified. May always be specified as a str, bytes, or a path-like object.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
mode
Operating-system mode bitfield.
dir_fd
If not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that
directory.
follow_symlinks
If False, and the last element of the path is a symbolic link,
chmod will modify the symbolic link itself instead of the file
the link points to.
It is an error to use dir_fd or follow_symlinks when specifying path as
an open file descriptor.
dir_fd and follow_symlinks may not be implemented on your platform.
If they are unavailable, using them will raise a NotImplementedError.
chown(path, uid, gid, *, dir_fd=None, follow_symlinks=True)
Change the owner and group id of path to the numeric uid and gid.
path
Path to be examined; can be string, bytes, a path-like object, or open-file-descriptor int.
dir_fd
If not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that
directory.
follow_symlinks
If False, and the last element of the path is a symbolic link,
stat will examine the symbolic link itself instead of the file
the link points to.
path may always be specified as a string.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
If follow_symlinks is False, and the last element of the path is a symbolic
link, chown will modify the symbolic link itself instead of the file the
link points to.
It is an error to use dir_fd or follow_symlinks when specifying path as
an open file descriptor.
dir_fd and follow_symlinks may not be implemented on your platform.
If they are unavailable, using them will raise a NotImplementedError.
chroot(path)
Change root directory to path.
close(fd)
Close a file descriptor.
closerange(fd_low, fd_high, /)
Closes all file descriptors in [fd_low, fd_high), ignoring errors.
confstr(name, /)
Return a string-valued system configuration variable.
cpu_count()
Return the number of CPUs in the system; return None if indeterminable.
This number is not equivalent to the number of CPUs the current process can
use. The number of usable CPUs can be obtained with
``len(os.sched_getaffinity(0))``
ctermid()
Return the name of the controlling terminal for this process.
device_encoding(fd)
Return a string describing the encoding of a terminal's file descriptor.
The file descriptor must be attached to a terminal.
If the device is not a terminal, return None.
dup(fd, /)
Return a duplicate of a file descriptor.
dup2(fd, fd2, inheritable=True)
Duplicate file descriptor.
execl(file, *args)
execl(file, *args)
Execute the executable file with argument list args, replacing the
current process.
execle(file, *args)
execle(file, *args, env)
Execute the executable file with argument list args and
environment env, replacing the current process.
execlp(file, *args)
execlp(file, *args)
Execute the executable file (which is searched for along $PATH)
with argument list args, replacing the current process.
execlpe(file, *args)
execlpe(file, *args, env)
Execute the executable file (which is searched for along $PATH)
with argument list args and environment env, replacing the current
process.
execv(path, argv, /)
Execute an executable path with arguments, replacing current process.
path
Path of executable file.
argv
Tuple or list of strings.
execve(path, argv, env)
Execute an executable path with arguments, replacing current process.
path
Path of executable file.
argv
Tuple or list of strings.
env
Dictionary of strings mapping to strings.
execvp(file, args)
execvp(file, args)
Execute the executable file (which is searched for along $PATH)
with argument list args, replacing the current process.
args may be a list or tuple of strings.
execvpe(file, args, env)
execvpe(file, args, env)
Execute the executable file (which is searched for along $PATH)
with argument list args and environment env , replacing the
current process.
args may be a list or tuple of strings.
fchdir(fd)
Change to the directory of the given file descriptor.
fd must be opened on a directory, not a file.
Equivalent to os.chdir(fd).
fchmod(fd, mode)
Change the access permissions of the file given by file descriptor fd.
Equivalent to os.chmod(fd, mode).
fchown(fd, uid, gid)
Change the owner and group id of the file specified by file descriptor.
Equivalent to os.chown(fd, uid, gid).
fdopen(fd, *args, **kwargs)
# Supply os.fdopen()
fork()
Fork a child process.
Return 0 to child process and PID of child to parent process.
forkpty()
Fork a new process with a new pseudo-terminal as controlling tty.
Returns a tuple of (pid, master_fd).
Like fork(), return pid of 0 to the child process,
and pid of child to the parent process.
To both, return fd of newly opened pseudo-terminal.
fpathconf(fd, name, /)
Return the configuration limit name for the file descriptor fd.
If there is no limit, return -1.
fsdecode(filename)
Decode filename (an os.PathLike, bytes, or str) from the filesystem
encoding with 'surrogateescape' error handler, return str unchanged. On
Windows, use 'strict' error handler if the file system encoding is
'mbcs' (which is the default encoding).
fsencode(filename)
Encode filename (an os.PathLike, bytes, or str) to the filesystem
encoding with 'surrogateescape' error handler, return bytes unchanged.
On Windows, use 'strict' error handler if the file system encoding is
'mbcs' (which is the default encoding).
fspath(path)
Return the file system path representation of the object.
If the object is str or bytes, then allow it to pass through as-is. If the
object defines __fspath__(), then return the result of that method. All other
types raise a TypeError.
fstat(fd)
Perform a stat system call on the given file descriptor.
Like stat(), but for an open file descriptor.
Equivalent to os.stat(fd).
fstatvfs(fd, /)
Perform an fstatvfs system call on the given fd.
Equivalent to statvfs(fd).
fsync(fd)
Force write of fd to disk.
ftruncate(fd, length, /)
Truncate a file, specified by file descriptor, to a specific length.
fwalk(top='.', topdown=True, onerror=None, *, follow_symlinks=False, dir_fd=None)
Directory tree generator.
This behaves exactly like walk(), except that it yields a 4-tuple
dirpath, dirnames, filenames, dirfd
`dirpath`, `dirnames` and `filenames` are identical to walk() output,
and `dirfd` is a file descriptor referring to the directory `dirpath`.
The advantage of fwalk() over walk() is that it's safe against symlink
races (when follow_symlinks is False).
If dir_fd is not None, it should be a file descriptor open to a directory,
and top should be relative; top will then be relative to that directory.
(dir_fd is always supported for fwalk.)
Caution:
Since fwalk() yields file descriptors, those are only valid until the
next iteration step, so you should dup() them if you want to keep them
for a longer period.
Example:
import os
for root, dirs, files, rootfd in os.fwalk('python/Lib/email'):
print(root, "consumes", end="")
print(sum([os.stat(name, dir_fd=rootfd).st_size for name in files]),
end="")
print("bytes in", len(files), "non-directory files")
if 'CVS' in dirs:
dirs.remove('CVS') # don't visit CVS directories
get_blocking(...)
get_blocking(fd) -> bool
Get the blocking mode of the file descriptor:
False if the O_NONBLOCK flag is set, True if the flag is cleared.
get_exec_path(env=None)
Returns the sequence of directories that will be searched for the
named executable (similar to a shell) when launching a process.
*env* must be an environment variable dict or None. If *env* is None,
os.environ will be used.
get_inheritable(fd, /)
Get the close-on-exe flag of the specified file descriptor.
get_terminal_size(...)
Return the size of the terminal window as (columns, lines).
The optional argument fd (default standard output) specifies
which file descriptor should be queried.
If the file descriptor is not connected to a terminal, an OSError
is thrown.
This function will only be defined if an implementation is
available for this system.
shutil.get_terminal_size is the high-level function which should
normally be used, os.get_terminal_size is the low-level implementation.
getcwd()
Return a unicode string representing the current working directory.
getcwdb()
Return a bytes string representing the current working directory.
getegid()
Return the current process's effective group id.
getenv(key, default=None)
Get an environment variable, return None if it doesn't exist.
The optional second argument can specify an alternate default.
key, default and the result are str.
getenvb(key, default=None)
Get an environment variable, return None if it doesn't exist.
The optional second argument can specify an alternate default.
key, default and the result are bytes.
geteuid()
Return the current process's effective user id.
getgid()
Return the current process's group id.
getgrouplist(...)
getgrouplist(user, group) -> list of groups to which a user belongs
Returns a list of groups to which a user belongs.
user: username to lookup
group: base group id of the user
getgroups()
Return list of supplemental group IDs for the process.
getloadavg()
Return average recent system load information.
Return the number of processes in the system run queue averaged over
the last 1, 5, and 15 minutes as a tuple of three floats.
Raises OSError if the load average was unobtainable.
getlogin()
Return the actual login name.
getpgid(pid)
Call the system call getpgid(), and return the result.
getpgrp()
Return the current process group id.
getpid()
Return the current process id.
getppid()
Return the parent's process id.
If the parent process has already exited, Windows machines will still
return its id; others systems will return the id of the 'init' process (1).
getpriority(which, who)
Return program scheduling priority.
getsid(pid, /)
Call the system call getsid(pid) and return the result.
getuid()
Return the current process's user id.
initgroups(...)
initgroups(username, gid) -> None
Call the system initgroups() to initialize the group access list with all of
the groups of which the specified username is a member, plus the specified
group id.
isatty(fd, /)
Return True if the fd is connected to a terminal.
Return True if the file descriptor is an open file descriptor
connected to the slave end of a terminal.
kill(pid, signal, /)
Kill a process with a signal.
killpg(pgid, signal, /)
Kill a process group with a signal.
lchflags(path, flags)
Set file flags.
This function will not follow symbolic links.
Equivalent to chflags(path, flags, follow_symlinks=False).
lchmod(path, mode)
Change the access permissions of a file, without following symbolic links.
If path is a symlink, this affects the link itself rather than the target.
Equivalent to chmod(path, mode, follow_symlinks=False).
lchown(path, uid, gid)
Change the owner and group id of path to the numeric uid and gid.
This function will not follow symbolic links.
Equivalent to os.chown(path, uid, gid, follow_symlinks=False).
link(src, dst, *, src_dir_fd=None, dst_dir_fd=None, follow_symlinks=True)
Create a hard link to a file.
If either src_dir_fd or dst_dir_fd is not None, it should be a file
descriptor open to a directory, and the respective path string (src or dst)
should be relative; the path will then be relative to that directory.
If follow_symlinks is False, and the last element of src is a symbolic
link, link will create a link to the symbolic link itself instead of the
file the link points to.
src_dir_fd, dst_dir_fd, and follow_symlinks may not be implemented on your
platform. If they are unavailable, using them will raise a
NotImplementedError.
listdir(path=None)
Return a list containing the names of the files in the directory.
path can be specified as either str, bytes, or a path-like object. If path is bytes,
the filenames returned will also be bytes; in all other circumstances
the filenames returned will be str.
If path is None, uses the path='.'.
On some platforms, path may also be specified as an open file descriptor;
the file descriptor must refer to a directory.
If this functionality is unavailable, using it raises NotImplementedError.
The list is in arbitrary order. It does not include the special
entries '.' and '..' even if they are present in the directory.
lockf(fd, command, length, /)
Apply, test or remove a POSIX lock on an open file descriptor.
fd
An open file descriptor.
command
One of F_LOCK, F_TLOCK, F_ULOCK or F_TEST.
length
The number of bytes to lock, starting at the current position.
lseek(fd, position, how, /)
Set the position of a file descriptor. Return the new position.
Return the new cursor position in number of bytes
relative to the beginning of the file.
lstat(path, *, dir_fd=None)
Perform a stat system call on the given path, without following symbolic links.
Like stat(), but do not follow symbolic links.
Equivalent to stat(path, follow_symlinks=False).
major(device, /)
Extracts a device major number from a raw device number.
makedev(major, minor, /)
Composes a raw device number from the major and minor device numbers.
makedirs(name, mode=511, exist_ok=False)
makedirs(name [, mode=0o777][, exist_ok=False])
Super-mkdir; create a leaf directory and all intermediate ones. Works like
mkdir, except that any intermediate path segment (not just the rightmost)
will be created if it does not exist. If the target directory already
exists, raise an OSError if exist_ok is False. Otherwise no exception is
raised. This is recursive.
minor(device, /)
Extracts a device minor number from a raw device number.
mkdir(path, mode=511, *, dir_fd=None)
Create a directory.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
The mode argument is ignored on Windows.
mkfifo(path, mode=438, *, dir_fd=None)
Create a "fifo" (a POSIX named pipe).
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
mknod(path, mode=384, device=0, *, dir_fd=None)
Create a node in the file system.
Create a node in the file system (file, device special file or named pipe)
at path. mode specifies both the permissions to use and the
type of node to be created, being combined (bitwise OR) with one of
S_IFREG, S_IFCHR, S_IFBLK, and S_IFIFO. If S_IFCHR or S_IFBLK is set on mode,
device defines the newly created device special file (probably using
os.makedev()). Otherwise device is ignored.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
nice(increment, /)
Add increment to the priority of process and return the new priority.
open(path, flags, mode=511, *, dir_fd=None)
Open a file for low level IO. Returns a file descriptor (integer).
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
openpty()
Open a pseudo-terminal.
Return a tuple of (master_fd, slave_fd) containing open file descriptors
for both the master and slave ends.
pathconf(path, name)
Return the configuration limit name for the file or directory path.
If there is no limit, return -1.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
pipe()
Create a pipe.
Returns a tuple of two file descriptors:
(read_fd, write_fd)
popen(cmd, mode='r', buffering=-1)
# Supply os.popen()
pread(fd, length, offset, /)
Read a number of bytes from a file descriptor starting at a particular offset.
Read length bytes from file descriptor fd, starting at offset bytes from
the beginning of the file. The file offset remains unchanged.
putenv(name, value, /)
Change or add an environment variable.
pwrite(fd, buffer, offset, /)
Write bytes to a file descriptor starting at a particular offset.
Write buffer to fd, starting at offset bytes from the beginning of
the file. Returns the number of bytes written. Does not change the
current file offset.
read(fd, length, /)
Read from a file descriptor. Returns a bytes object.
readlink(...)
readlink(path, *, dir_fd=None) -> path
Return a string representing the path to which the symbolic link points.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
readv(fd, buffers, /)
Read from a file descriptor fd into an iterable of buffers.
The buffers should be mutable buffers accepting bytes.
readv will transfer data into each buffer until it is full
and then move on to the next buffer in the sequence to hold
the rest of the data.
readv returns the total number of bytes read,
which may be less than the total capacity of all the buffers.
register_at_fork(*, before=None, after_in_child=None, after_in_parent=None)
Register callables to be called when forking a new process.
before
A callable to be called in the parent before the fork() syscall.
after_in_child
A callable to be called in the child after fork().
after_in_parent
A callable to be called in the parent after fork().
'before' callbacks are called in reverse order.
'after_in_child' and 'after_in_parent' callbacks are called in order.
remove(path, *, dir_fd=None)
Remove a file (same as unlink()).
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
removedirs(name)
removedirs(name)
Super-rmdir; remove a leaf directory and all empty intermediate
ones. Works like rmdir except that, if the leaf directory is
successfully removed, directories corresponding to rightmost path
segments will be pruned away until either the whole path is
consumed or an error occurs. Errors during this latter phase are
ignored -- they generally mean that a directory was not empty.
rename(src, dst, *, src_dir_fd=None, dst_dir_fd=None)
Rename a file or directory.
If either src_dir_fd or dst_dir_fd is not None, it should be a file
descriptor open to a directory, and the respective path string (src or dst)
should be relative; the path will then be relative to that directory.
src_dir_fd and dst_dir_fd, may not be implemented on your platform.
If they are unavailable, using them will raise a NotImplementedError.
renames(old, new)
renames(old, new)
Super-rename; create directories as necessary and delete any left
empty. Works like rename, except creation of any intermediate
directories needed to make the new pathname good is attempted
first. After the rename, directories corresponding to rightmost
path segments of the old name will be pruned until either the
whole path is consumed or a nonempty directory is found.
Note: this function can fail with the new directory structure made
if you lack permissions needed to unlink the leaf directory or
file.
replace(src, dst, *, src_dir_fd=None, dst_dir_fd=None)
Rename a file or directory, overwriting the destination.
If either src_dir_fd or dst_dir_fd is not None, it should be a file
descriptor open to a directory, and the respective path string (src or dst)
should be relative; the path will then be relative to that directory.
src_dir_fd and dst_dir_fd, may not be implemented on your platform.
If they are unavailable, using them will raise a NotImplementedError.
rmdir(path, *, dir_fd=None)
Remove a directory.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
scandir(path=None)
Return an iterator of DirEntry objects for given path.
path can be specified as either str, bytes, or a path-like object. If path
is bytes, the names of yielded DirEntry objects will also be bytes; in
all other circumstances they will be str.
If path is None, uses the path='.'.
sched_get_priority_max(policy)
Get the maximum scheduling priority for policy.
sched_get_priority_min(policy)
Get the minimum scheduling priority for policy.
sched_yield()
Voluntarily relinquish the CPU.
sendfile(...)
sendfile(out, in, offset, count) -> byteswritten
sendfile(out, in, offset, count[, headers][, trailers], flags=0)
-> byteswritten
Copy count bytes from file descriptor in to file descriptor out.
set_blocking(...)
set_blocking(fd, blocking)
Set the blocking mode of the specified file descriptor.
Set the O_NONBLOCK flag if blocking is False,
clear the O_NONBLOCK flag otherwise.
set_inheritable(fd, inheritable, /)
Set the inheritable flag of the specified file descriptor.
setegid(egid, /)
Set the current process's effective group id.
seteuid(euid, /)
Set the current process's effective user id.
setgid(gid, /)
Set the current process's group id.
setgroups(groups, /)
Set the groups of the current process to list.
setpgid(pid, pgrp, /)
Call the system call setpgid(pid, pgrp).
setpgrp()
Make the current process the leader of its process group.
setpriority(which, who, priority)
Set program scheduling priority.
setregid(rgid, egid, /)
Set the current process's real and effective group ids.
setreuid(ruid, euid, /)
Set the current process's real and effective user ids.
setsid()
Call the system call setsid().
setuid(uid, /)
Set the current process's user id.
spawnl(mode, file, *args)
spawnl(mode, file, *args) -> integer
Execute file with arguments from args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnle(mode, file, *args)
spawnle(mode, file, *args, env) -> integer
Execute file with arguments from args in a subprocess with the
supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnlp(mode, file, *args)
spawnlp(mode, file, *args) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess with the supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnlpe(mode, file, *args)
spawnlpe(mode, file, *args, env) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess with the supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnv(mode, file, args)
spawnv(mode, file, args) -> integer
Execute file with arguments from args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnve(mode, file, args, env)
spawnve(mode, file, args, env) -> integer
Execute file with arguments from args in a subprocess with the
specified environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnvp(mode, file, args)
spawnvp(mode, file, args) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
spawnvpe(mode, file, args, env)
spawnvpe(mode, file, args, env) -> integer
Execute file (which is looked for along $PATH) with arguments from
args in a subprocess with the supplied environment.
If mode == P_NOWAIT return the pid of the process.
If mode == P_WAIT return the process's exit code if it exits normally;
otherwise return -SIG, where SIG is the signal that killed it.
stat(path, *, dir_fd=None, follow_symlinks=True)
Perform a stat system call on the given path.
path
Path to be examined; can be string, bytes, a path-like object or
open-file-descriptor int.
dir_fd
If not None, it should be a file descriptor open to a directory,
and path should be a relative string; path will then be relative to
that directory.
follow_symlinks
If False, and the last element of the path is a symbolic link,
stat will examine the symbolic link itself instead of the file
the link points to.
dir_fd and follow_symlinks may not be implemented
on your platform. If they are unavailable, using them will raise a
NotImplementedError.
It's an error to use dir_fd or follow_symlinks when specifying path as
an open file descriptor.
statvfs(path)
Perform a statvfs system call on the given path.
path may always be specified as a string.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
strerror(code, /)
Translate an error code to a message string.
symlink(src, dst, target_is_directory=False, *, dir_fd=None)
Create a symbolic link pointing to src named dst.
target_is_directory is required on Windows if the target is to be
interpreted as a directory. (On Windows, symlink requires
Windows 6.0 or greater, and raises a NotImplementedError otherwise.)
target_is_directory is ignored on non-Windows platforms.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
sync()
Force write of everything to disk.
sysconf(name, /)
Return an integer-valued system configuration variable.
system(command)
Execute the command in a subshell.
tcgetpgrp(fd, /)
Return the process group associated with the terminal specified by fd.
tcsetpgrp(fd, pgid, /)
Set the process group associated with the terminal specified by fd.
times()
Return a collection containing process timing information.
The object returned behaves like a named tuple with these fields:
(utime, stime, cutime, cstime, elapsed_time)
All fields are floating point numbers.
truncate(path, length)
Truncate a file, specified by path, to a specific length.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
ttyname(fd, /)
Return the name of the terminal device connected to 'fd'.
fd
Integer file descriptor handle.
umask(mask, /)
Set the current numeric umask and return the previous umask.
uname()
Return an object identifying the current operating system.
The object behaves like a named tuple with the following fields:
(sysname, nodename, release, version, machine)
unlink(path, *, dir_fd=None)
Remove a file (same as remove()).
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
dir_fd may not be implemented on your platform.
If it is unavailable, using it will raise a NotImplementedError.
unsetenv(name, /)
Delete an environment variable.
urandom(size, /)
Return a bytes object containing random bytes suitable for cryptographic use.
utime(path, times=None, *, ns=None, dir_fd=None, follow_symlinks=True)
Set the access and modified time of path.
path may always be specified as a string.
On some platforms, path may also be specified as an open file descriptor.
If this functionality is unavailable, using it raises an exception.
If times is not None, it must be a tuple (atime, mtime);
atime and mtime should be expressed as float seconds since the epoch.
If ns is specified, it must be a tuple (atime_ns, mtime_ns);
atime_ns and mtime_ns should be expressed as integer nanoseconds
since the epoch.
If times is None and ns is unspecified, utime uses the current time.
Specifying tuples for both times and ns is an error.
If dir_fd is not None, it should be a file descriptor open to a directory,
and path should be relative; path will then be relative to that directory.
If follow_symlinks is False, and the last element of the path is a symbolic
link, utime will modify the symbolic link itself instead of the file the
link points to.
It is an error to use dir_fd or follow_symlinks when specifying path
as an open file descriptor.
dir_fd and follow_symlinks may not be available on your platform.
If they are unavailable, using them will raise a NotImplementedError.
wait()
Wait for completion of a child process.
Returns a tuple of information about the child process:
(pid, status)
wait3(options)
Wait for completion of a child process.
Returns a tuple of information about the child process:
(pid, status, rusage)
wait4(pid, options)
Wait for completion of a specific child process.
Returns a tuple of information about the child process:
(pid, status, rusage)
waitpid(pid, options, /)
Wait for completion of a given child process.
Returns a tuple of information regarding the child process:
(pid, status)
The options argument is ignored on Windows.
walk(top, topdown=True, onerror=None, followlinks=False)
Directory tree generator.
For each directory in the directory tree rooted at top (including top
itself, but excluding '.' and '..'), yields a 3-tuple
dirpath, dirnames, filenames
dirpath is a string, the path to the directory. dirnames is a list of
the names of the subdirectories in dirpath (excluding '.' and '..').
filenames is a list of the names of the non-directory files in dirpath.
Note that the names in the lists are just names, with no path components.
To get a full path (which begins with top) to a file or directory in
dirpath, do os.path.join(dirpath, name).
If optional arg 'topdown' is true or not specified, the triple for a
directory is generated before the triples for any of its subdirectories
(directories are generated top down). If topdown is false, the triple
for a directory is generated after the triples for all of its
subdirectories (directories are generated bottom up).
When topdown is true, the caller can modify the dirnames list in-place
(e.g., via del or slice assignment), and walk will only recurse into the
subdirectories whose names remain in dirnames; this can be used to prune the
search, or to impose a specific order of visiting. Modifying dirnames when
topdown is false is ineffective, since the directories in dirnames have
already been generated by the time dirnames itself is generated. No matter
the value of topdown, the list of subdirectories is retrieved before the
tuples for the directory and its subdirectories are generated.
By default errors from the os.scandir() call are ignored. If
optional arg 'onerror' is specified, it should be a function; it
will be called with one argument, an OSError instance. It can
report the error to continue with the walk, or raise the exception
to abort the walk. Note that the filename is available as the
filename attribute of the exception object.
By default, os.walk does not follow symbolic links to subdirectories on
systems that support them. In order to get this functionality, set the
optional argument 'followlinks' to true.
Caution: if you pass a relative pathname for top, don't change the
current working directory between resumptions of walk. walk never
changes the current directory, and assumes that the client doesn't
either.
Example:
import os
from os.path import join, getsize
for root, dirs, files in os.walk('python/Lib/email'):
print(root, "consumes", end="")
print(sum([getsize(join(root, name)) for name in files]), end="")
print("bytes in", len(files), "non-directory files")
if 'CVS' in dirs:
dirs.remove('CVS') # don't visit CVS directories
write(fd, data, /)
Write a bytes object to a file descriptor.
writev(fd, buffers, /)
Iterate over buffers, and write the contents of each to a file descriptor.
Returns the total number of bytes written.
buffers must be a sequence of bytes-like objects.
DATA
CLD_CONTINUED = 6
CLD_DUMPED = 3
CLD_EXITED = 1
CLD_TRAPPED = 4
EX_CANTCREAT = 73
EX_CONFIG = 78
EX_DATAERR = 65
EX_IOERR = 74
EX_NOHOST = 68
EX_NOINPUT = 66
EX_NOPERM = 77
EX_NOUSER = 67
EX_OK = 0
EX_OSERR = 71
EX_OSFILE = 72
EX_PROTOCOL = 76
EX_SOFTWARE = 70
EX_TEMPFAIL = 75
EX_UNAVAILABLE = 69
EX_USAGE = 64
F_LOCK = 1
F_OK = 0
F_TEST = 3
F_TLOCK = 2
F_ULOCK = 0
NGROUPS_MAX = 16
O_ACCMODE = 3
O_APPEND = 8
O_ASYNC = 64
O_CLOEXEC = 16777216
O_CREAT = 512
O_DIRECTORY = 1048576
O_DSYNC = 4194304
O_EXCL = 2048
O_EXLOCK = 32
O_NDELAY = 4
O_NOCTTY = 131072
O_NOFOLLOW = 256
O_NONBLOCK = 4
O_RDONLY = 0
O_RDWR = 2
O_SHLOCK = 16
O_SYNC = 128
O_TRUNC = 1024
O_WRONLY = 1
PRIO_PGRP = 1
PRIO_PROCESS = 0
PRIO_USER = 2
P_ALL = 0
P_NOWAIT = 1
P_NOWAITO = 1
P_PGID = 2
P_PID = 1
P_WAIT = 0
RTLD_GLOBAL = 8
RTLD_LAZY = 1
RTLD_LOCAL = 4
RTLD_NODELETE = 128
RTLD_NOLOAD = 16
RTLD_NOW = 2
R_OK = 4
SCHED_FIFO = 4
SCHED_OTHER = 1
SCHED_RR = 2
SEEK_CUR = 1
SEEK_DATA = 4
SEEK_END = 2
SEEK_HOLE = 3
SEEK_SET = 0
ST_NOSUID = 2
ST_RDONLY = 1
TMP_MAX = 308915776
WCONTINUED = 16
WEXITED = 4
WNOHANG = 1
WNOWAIT = 32
WSTOPPED = 8
WUNTRACED = 2
W_OK = 2
X_OK = 1
__all__ = ['altsep', 'curdir', 'pardir', 'sep', 'pathsep', 'linesep', ...
altsep = None
confstr_names = {'CS_PATH': 1, 'CS_XBS5_ILP32_OFF32_CFLAGS': 20, 'CS_X...
curdir = '.'
defpath = ':/bin:/usr/bin'
devnull = '/dev/null'
environ = environ({'TERM_SESSION_ID': 'w0t2p0:7C358450-9EC...END': 'mo...
environb = environ({b'TERM_SESSION_ID': b'w0t2p0:7C358450-9...ND': b'm...
extsep = '.'
linesep = '\n'
name = 'posix'
pardir = '..'
pathconf_names = {'PC_ALLOC_SIZE_MIN': 16, 'PC_ASYNC_IO': 17, 'PC_CHOW...
pathsep = ':'
sep = '/'
supports_bytes_environ = True
sysconf_names = {'SC_2_CHAR_TERM': 20, 'SC_2_C_BIND': 18, 'SC_2_C_DEV'...
FILE
/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/os.py
###Markdown
The shutil module can be used for file and directory management. 10.2 File wildcards: the glob module provides a way to search for files using wildcard patterns.
###Code
import glob
glob.glob('*.ip*')
###Output
_____no_output_____
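###Markdown
A small companion sketch for the shutil note above the glob example. It is a minimal illustration, not executed here; the file and directory names (a temporary scratch area containing a project/ folder with demo.txt) are placeholders invented for the sketch.
###Code
import os, shutil, tempfile

work = tempfile.mkdtemp()                                        # scratch area so the sketch is self-contained
src_dir = os.path.join(work, 'project')
os.mkdir(src_dir)
with open(os.path.join(src_dir, 'demo.txt'), 'w') as f:
    f.write('hello')
shutil.copyfile(os.path.join(src_dir, 'demo.txt'),
                os.path.join(src_dir, 'demo_copy.txt'))          # copy a single file
shutil.make_archive(os.path.join(work, 'project_backup'),
                    'zip', src_dir)                              # pack project/ into project_backup.zip
shutil.rmtree(work)                                              # remove the scratch tree afterwards
###Output
_____no_output_____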
###Markdown
10.3 Command-line arguments: command-line arguments are stored in the argv attribute of the sys module (a list). Nowadays, however, command-line parsing is usually done with the argparse module. 10.4 Error output redirection and program termination: the sys module also has stdin, stdout and stderr attributes; the last of these is very useful for emitting warnings and error messages. The most direct way to terminate a script is sys.exit().
###Code
import sys
sys.stderr.write('Warning, log file not found starting a new one\n')
###Output
Warning, log file not found starting a new one
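###Markdown
A minimal argparse sketch for section 10.3, not executed here. Because a notebook has no real command line, the arguments are parsed from an explicit list; the file name and option values are made-up placeholders.
###Code
import argparse

parser = argparse.ArgumentParser(description='Show the top lines of a file.')
parser.add_argument('filename')                                  # positional argument
parser.add_argument('-l', '--lines', type=int, default=10)       # optional argument with a default
args = parser.parse_args(['demo.txt', '--lines', '5'])           # explicit list instead of sys.argv
###Output
_____no_output_____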
###Markdown
10.5 String pattern matching: the re module provides regular expressions. When only simple capabilities are needed, plain string methods are the better choice.
###Code
'tea for too'.replace('too', 'two')
###Output
_____no_output_____
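###Markdown
A short re sketch for section 10.5, not executed here, showing two classic patterns: extracting the words that start with "f" and collapsing a doubled word. The sample sentences are placeholders.
###Code
import re

matches = re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest')   # ['foot', 'fell', 'fastest']
cleaned = re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat')         # 'cat in the hat'
###Output
_____no_output_____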
###Markdown
10.6 Mathematics: the math module (math.cos, math.pi, math.log), the random module (random.choice, random.sample, random.random(), random.randrange()) and the statistics module (statistics.mean(), statistics.median(), statistics.variance()). 10.7 Internet access: there are a number of modules for accessing the internet and processing internet protocols; two of the simplest are urllib.request for retrieving data from URLs and smtplib for sending mail. 10.8 Dates and times: the datetime module supplies classes for manipulating dates and times. 10.9 Data compression: common compression modules include zlib, gzip, bz2, lzma, zipfile and tarfile. 10.10 Performance measurement: the timeit module, plus the profile and pstats modules, provides timing measurement for larger blocks of code.
###Code
from timeit import Timer
Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()
Timer('a,b = b,a', 'a=1; b=2').timeit()
###Output
_____no_output_____
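###Markdown
A quick tour of the modules named in sections 10.6 to 10.9, not executed here; the numbers, strings and dates are arbitrary placeholders chosen only to show the calls.
###Code
import math, random, statistics, zlib
from datetime import date

ratio = math.cos(math.pi / 4)                                           # math: trig functions and constants
fruit = random.choice(['apple', 'pear', 'banana'])                      # random: pick one element
middle = statistics.median([2.75, 1.75, 1.25, 0.25, 1.25])              # statistics: basic aggregates
days_since = (date.today() - date(2017, 1, 1)).days                     # datetime: date arithmetic in days
packed = zlib.compress(b'witch which has which witches wrist watch')    # zlib: lossless compression
###Output
_____no_output_____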
###Markdown
10.11 Quality control: the doctest module provides a tool that scans a module and validates the tests embedded in the program's docstrings.
###Code
def average(values):
"""Computes the arithmetic mean of a list of numbers.
>>> print(average([20, 30, 70]))
40.0
"""
return sum(values) / len(values)
import doctest
doctest.testmod() # automatically validate the embedded tests
###Output
_____no_output_____
###Markdown
The unittest module is more involved than the doctest module, but it provides a more comprehensive set of features.
###Code
import unittest
class TestStatisticalFunctions(unittest.TestCase):
def test_average(self):
self.assertEqual(average([20, 30, 70]), 40.0)
self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
with self.assertRaises(ZeroDivisionError):
average([])
with self.assertRaises(TypeError):
average(20, 30, 70)
unittest.main(argv=['first-arg-is-ignored'], exit=False)  # inside a notebook; from the command line, a plain unittest.main() invokes all tests
###Output
E
======================================================================
ERROR: /Users/tiankai/Library/Jupyter/runtime/kernel-7af24a60-0692-4641-9cc3-00520f441837 (unittest.loader._FailedTest)
----------------------------------------------------------------------
AttributeError: module '__main__' has no attribute '/Users/tiankai/Library/Jupyter/runtime/kernel-7af24a60-0692-4641-9cc3-00520f441837'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
|
deeplearning.ai/Refresher.ipynb | ###Markdown
Tensors. To retrieve the shape of a tensor, ``` tensor.get_shape().as_list() ``` is used. Reshape tensors: to convert a 3D tensor to 2D (for example, the activation volume of a layer, which is a 3D volume of shape n_H x n_W x n_C), ```np.transpose(np.reshape(tensor, [n_H*n_W, n_C]))``` can be used.
###Code
## Example:
import numpy as np
import tensorflow as tf  # TF 2.x API (tf.random.set_seed)
np.random.seed(10)
tf.random.set_seed(10)
a_C = np.random.normal(loc=1, scale=4, size=[1, 1, 4, 4, 3])
a_G = np.random.normal(loc=1, scale=4, size=[1, 1, 4, 4, 3])
a_C.shape
a_G.shape
a_C = tf.random.normal([1, 1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random.normal([1, 1, 4, 4, 3], mean=1, stddev=4)
a_C.shape
a_G.shape
_, _, n_H, n_W, n_C = a_C.get_shape().as_list()
n_H, n_W, n_C
reshaped_A_C = tf.reshape(a_C, [n_H*n_W, n_C])
reshaped_A_C.shape
trs_A_C = tf.transpose(reshaped_A_C)
trs_A_C.shape
###Output
_____no_output_____ |
notebooks/Telco - Action Rules.ipynb | ###Markdown
Telco - Action Rules Discovery. This is a business-oriented example. The source dataset, called "Telco Customer Churn", is from Kaggle („Telco Customer Churn“, 2017). The task for action rules is to find the actions that can limit customer churn. Load data: load the data into a Pandas dataframe.
###Code
import pandas as pd
from actionrules.actionRulesDiscovery import ActionRulesDiscovery
pd.set_option('display.max_columns', None)
dataFrame = pd.read_csv("data/telco.csv", sep=";")
dataFrame.head()
###Output
_____no_output_____
###Markdown
Instantiate and fit the model object. The cell below also measures the time needed to fit the model.
###Code
import time
actionRulesDiscovery = ActionRulesDiscovery()
actionRulesDiscovery.load_pandas(dataFrame)
start = time.time()
actionRulesDiscovery.fit(stable_attributes = ["gender", "SeniorCitizen", "Partner"],
flexible_attributes = ["PhoneService",
"InternetService",
"OnlineSecurity",
"DeviceProtection",
"TechSupport",
"StreamingTV",
],
consequent = "Churn",
conf=60,
supp=4,
desired_classes = ["No"],
is_nan=False,
is_reduction=True,
min_stable_attributes=1,
min_flexible_attributes=1)
end = time.time()
print("Time: " + str(end - start) + "s")
###Output
Time: 7.172839879989624s
###Markdown
Number of discovered action rules
###Code
len(actionRulesDiscovery.get_action_rules())
###Output
_____no_output_____
###Markdown
Representation of action rules
###Code
for rule in actionRulesDiscovery.get_action_rules_representation():
print(rule)
print(" ")
###Output
r = [(Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (DeviceProtection: No → No internet service) ∧ (TechSupport: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.06772682095697856, confidence: 0.5599898610564512 and uplift: 0.05620874238092184.
r = [(Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (DeviceProtection: No → No internet service) ∧ (StreamingTV: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.0440153343745563, confidence: 0.5367331680635895 and uplift: 0.036205441411027696.
r = [(Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (DeviceProtection: No → No internet service) ∧ (TechSupport: No → No internet service) ∧ (StreamingTV: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.04174357518103081, confidence: 0.5518065094057928 and uplift: 0.03453910027669047.
r = [(Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (TechSupport: No → No internet service) ∧ (StreamingTV: Yes → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.0400397557858867, confidence: 0.5383313809709749 and uplift: 0.03295636449338401.
r = [(Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (DeviceProtection: No internet service → No) ∧ (TechSupport: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.06772682095697856, confidence: 0.5599898610564512 and uplift: 0.058203007879325114.
r = [(Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (DeviceProtection: No internet service → No) ∧ (StreamingTV: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.0440153343745563, confidence: 0.5367331680635895 and uplift: 0.05529048029436011.
r = [(Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (TechSupport: No internet service → No) ∧ (StreamingTV: No internet service → Yes) ] ⇒ [Churn: No → Yes] with support: 0.0400397557858867, confidence: 0.5383313809709749 and uplift: 0.05549063081364658.
r = [(Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (DeviceProtection: No internet service → No) ∧ (TechSupport: No internet service → No) ∧ (StreamingTV: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.04174357518103081, confidence: 0.5518065094057928 and uplift: 0.05717817440763044.
r = [(SeniorCitizen: 0) ∧ (Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (DeviceProtection: No → No internet service) ∧ (TechSupport: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.04756495811443987, confidence: 0.5464168524169862 and uplift: 0.039304895280051814.
r = [(SeniorCitizen: 0) ∧ (Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (DeviceProtection: No internet service → No) ∧ (TechSupport: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.04756495811443987, confidence: 0.5464168524169862 and uplift: 0.05472561149394076.
r = [(gender: Female) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (DeviceProtection: No → No internet service) ∧ (TechSupport: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.05068862700553741, confidence: 0.5713442003307347 and uplift: 0.04453632600352662.
r = [(gender: Female) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (DeviceProtection: No internet service → No) ∧ (TechSupport: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.05068862700553741, confidence: 0.5713442003307347 and uplift: 0.05755819294919444.
r = [(gender: Female) ∧ (Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (OnlineSecurity: No → No internet service) ∧ (TechSupport: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.043873349424960954, confidence: 0.5484247006931318 and uplift: 0.03981257986653415.
r = [(gender: Female) ∧ (Partner: No) ∧ (InternetService: Fiber optic → No) ∧ (DeviceProtection: No → No internet service) ∧ (TechSupport: No → No internet service) ] ⇒ [Churn: Yes → No] with support: 0.04060769558426807, confidence: 0.5592937155876211 and uplift: 0.0338220496453463.
r = [(gender: Female) ∧ (Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (DeviceProtection: No internet service → No) ∧ (TechSupport: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.04060769558426807, confidence: 0.5592937155876211 and uplift: 0.025477308138961725.
r = [(gender: Female) ∧ (Partner: No) ∧ (InternetService: No → Fiber optic) ∧ (OnlineSecurity: No internet service → No) ∧ (TechSupport: No internet service → No) ] ⇒ [Churn: No → Yes] with support: 0.043873349424960954, confidence: 0.5484247006931318 and uplift: 0.024882862416583846.
###Markdown
Human language representation
###Code
for rule in actionRulesDiscovery.get_pretty_action_rules():
print(rule)
print(" ")
###Output
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.06772682095697856, confidence: 0.5599898610564512 and uplift: 0.05620874238092184.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'StreamingTV' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.0440153343745563, confidence: 0.5367331680635895 and uplift: 0.036205441411027696.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', attribute 'StreamingTV' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.04174357518103081, confidence: 0.5518065094057928 and uplift: 0.03453910027669047.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', attribute 'StreamingTV' value 'Yes' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.0400397557858867, confidence: 0.5383313809709749 and uplift: 0.03295636449338401.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.06772682095697856, confidence: 0.5599898610564512 and uplift: 0.058203007879325114.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'StreamingTV' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.0440153343745563, confidence: 0.5367331680635895 and uplift: 0.05529048029436011.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', attribute 'StreamingTV' value 'No internet service' is changed to 'Yes', then 'Churn' value 'No' is changed to 'Yes' with support: 0.0400397557858867, confidence: 0.5383313809709749 and uplift: 0.05549063081364658.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', attribute 'StreamingTV' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.04174357518103081, confidence: 0.5518065094057928 and uplift: 0.05717817440763044.
If attribute 'SeniorCitizen' is '0', attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.04756495811443987, confidence: 0.5464168524169862 and uplift: 0.039304895280051814.
If attribute 'SeniorCitizen' is '0', attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.04756495811443987, confidence: 0.5464168524169862 and uplift: 0.05472561149394076.
If attribute 'gender' is 'Female', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.05068862700553741, confidence: 0.5713442003307347 and uplift: 0.04453632600352662.
If attribute 'gender' is 'Female', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.05068862700553741, confidence: 0.5713442003307347 and uplift: 0.05755819294919444.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.043873349424960954, confidence: 0.5484247006931318 and uplift: 0.03981257986653415.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.04060769558426807, confidence: 0.5592937155876211 and uplift: 0.0338220496453463.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.04060769558426807, confidence: 0.5592937155876211 and uplift: 0.025477308138961725.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.043873349424960954, confidence: 0.5484247006931318 and uplift: 0.024882862416583846.
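###Markdown
Before deploying the model it can help to look only at the rules that move churn in the desired direction. The cell below is a minimal illustrative sketch that relies solely on the `get_pretty_action_rules()` and `get_action_rules()` calls shown above: it keeps the churn-reducing rules (consequent changed from 'Yes' to 'No') by plain substring matching on the human-readable output.
###Code
# Keep only the rules whose consequent changes Churn from 'Yes' to 'No'.
churn_reducing = [
    rule for rule in actionRulesDiscovery.get_pretty_action_rules()
    if "'Churn' value 'Yes' is changed to 'No'" in rule
]
print(len(churn_reducing), "of", len(actionRulesDiscovery.get_action_rules()), "rules reduce churn")
for rule in churn_reducing:
    print(rule)
###Output
_____no_output_____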
###Markdown
Deployment Save the model
###Code
from joblib import dump, load
dump(actionRulesDiscovery, 'armodel.joblib')
###Output
_____no_output_____
###Markdown
Load the model
###Code
actionRulesDiscovery = load('armodel.joblib')
###Output
_____no_output_____
###Markdown
Test that the loaded model works well.
###Code
for rule in actionRulesDiscovery.get_pretty_action_rules():
print(rule)
print(" ")
###Output
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.06772682095697856, confidence: 0.5599898610564512 and uplift: 0.05620874238092184.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'StreamingTV' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.0440153343745563, confidence: 0.5367331680635895 and uplift: 0.036205441411027696.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', attribute 'StreamingTV' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.04174357518103081, confidence: 0.5518065094057928 and uplift: 0.03453910027669047.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', attribute 'StreamingTV' value 'Yes' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.0400397557858867, confidence: 0.5383313809709749 and uplift: 0.03295636449338401.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.06772682095697856, confidence: 0.5599898610564512 and uplift: 0.058203007879325114.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'StreamingTV' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.0440153343745563, confidence: 0.5367331680635895 and uplift: 0.05529048029436011.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', attribute 'StreamingTV' value 'No internet service' is changed to 'Yes', then 'Churn' value 'No' is changed to 'Yes' with support: 0.0400397557858867, confidence: 0.5383313809709749 and uplift: 0.05549063081364658.
If attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', attribute 'StreamingTV' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.04174357518103081, confidence: 0.5518065094057928 and uplift: 0.05717817440763044.
If attribute 'SeniorCitizen' is '0', attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.04756495811443987, confidence: 0.5464168524169862 and uplift: 0.039304895280051814.
If attribute 'SeniorCitizen' is '0', attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.04756495811443987, confidence: 0.5464168524169862 and uplift: 0.05472561149394076.
If attribute 'gender' is 'Female', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.05068862700553741, confidence: 0.5713442003307347 and uplift: 0.04453632600352662.
If attribute 'gender' is 'Female', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.05068862700553741, confidence: 0.5713442003307347 and uplift: 0.05755819294919444.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'OnlineSecurity' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.043873349424960954, confidence: 0.5484247006931318 and uplift: 0.03981257986653415.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'Fiber optic' is changed to 'No', attribute 'DeviceProtection' value 'No' is changed to 'No internet service', attribute 'TechSupport' value 'No' is changed to 'No internet service', then 'Churn' value 'Yes' is changed to 'No' with support: 0.04060769558426807, confidence: 0.5592937155876211 and uplift: 0.0338220496453463.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'DeviceProtection' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.04060769558426807, confidence: 0.5592937155876211 and uplift: 0.025477308138961725.
If attribute 'gender' is 'Female', attribute 'Partner' is 'No', attribute 'InternetService' value 'No' is changed to 'Fiber optic', attribute 'OnlineSecurity' value 'No internet service' is changed to 'No', attribute 'TechSupport' value 'No internet service' is changed to 'No', then 'Churn' value 'No' is changed to 'Yes' with support: 0.043873349424960954, confidence: 0.5484247006931318 and uplift: 0.024882862416583846.
|
notebooks/pydata02_python_basics_2.ipynb | ###Markdown
Python Programming Basics, Part 1, Episode 2 Main topics * Scalar types * Control flow: `if` conditionals, `for` loops, `while` loops Scalar types Types whose values are single items, such as integers, floating-point numbers, booleans (`True` and `False`), strings, and dates and times, are called __scalar__ types. Lists, tuples, dictionaries, and the like are not scalar types. __Note:__ Depending on the language, strings may not be scalar; in C, for example, a string is defined as an array of characters, so it is hard to call it a scalar type. Integer type: `int` The `int` type handles integers of arbitrary size.
###Code
ival = 17239871
ival ** 6 # ival to the 6th power
###Output
_____no_output_____
###Markdown
Floating-point type: `float` The `float` type handles numbers with a fractional part.
###Code
fval = 7.243
###Output
_____no_output_____
###Markdown
Scientific notation can also be used.
###Code
fval2 = 6.78e-5
fval2
###Output
_____no_output_____
###Markdown
Division is computed as a floating-point number.
###Code
3 / 2
###Output
_____no_output_____
###Markdown
Floor division returns the quotient as an integer.
###Code
3 // 2
###Output
_____no_output_____
###Markdown
String type: `str` Strings are written with single quotes (`'`) or double quotes (`"`).
###Code
a = '작은따옴표를 사용하는 문자열'
b = "큰따옴표를 사용하는 문자열"
###Output
_____no_output_____
###Markdown
A string spanning several lines is wrapped in triple double quotes.
###Code
c = """
여러 줄에 걸친 문자열은
삼중 큰따옴표로 감싼다.
"""
###Output
_____no_output_____
###Markdown
The string type provides a variety of methods. For example, to check how many lines a string spans, use the `count()` method to count how many times the newline character (`\n`) occurs.
###Code
c.count('\n')
###Output
_____no_output_____
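###Markdown
Besides `count()`, a handful of other built-in string methods come up constantly; the short sketch below illustrates a few of them on an arbitrary example string.
###Code
s = 'Data Analysis with Python'
s.lower() # 'data analysis with python'
s.split() # ['Data', 'Analysis', 'with', 'Python'], split on whitespace
s.startswith('Data') # True
s.replace(' ', '_') # 'Data_Analysis_with_Python'
###Output
_____no_output_____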
###Markdown
As explained earlier, strings are immutable: once created, they cannot be modified.
###Code
a = 'this is a string'
a[10] = 'f'
###Output
_____no_output_____
###Markdown
Although a string cannot be modified after it is created, it can be used to build new strings. For example, the `replace()` method creates a new string by substituting a different string for a given substring.
###Code
b = a.replace('string', 'longer string')
b
###Output
_____no_output_____
###Markdown
The value that the variable `a` refers to has not changed.
###Code
a
###Output
_____no_output_____
###Markdown
Many Python objects can be converted to strings with the `str()` function. __Caution:__ `str` denotes the string type, while `str()` denotes the function that returns a string.
###Code
a = 5.6
s = str(a)
type(s)
###Output
_____no_output_____
###Markdown
A string can also be treated as a sequence type, that is, an ordered collection, much like a list or tuple, so indexing and slicing both work on it. __Note:__ Indexing and slicing will come up a lot later on.
###Code
s = 'python'
s[:3]
###Output
_____no_output_____
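###Markdown
Indexing and `len()` work on strings in the same way; a quick illustration with the same string.
###Code
s = 'python'
s[0] # 'p', the first character
s[-1] # 'n', the last character
len(s) # 6
s[::-1] # 'nohtyp', reversed by slicing with a negative step
###Output
_____no_output_____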
###Markdown
The backslash (\) __Caution:__ In browsers on Windows with a Korean locale the backslash may appear as the won currency symbol (₩), but it behaves identically. The backslash character (\) plays a special role: it is what makes `\n` mean a newline and `\t` mean a tab. You therefore have to be careful when a literal backslash should be part of a string. For example, suppose you want the following string: ```python"12\34"``` Simply writing it as below behaves differently, because the backslash turns `\3` into a special character (see the Unicode discussion below).
###Code
s = '12\34'
print(s)
s = '\3'
print(s)
###Output
###Markdown
To get `12\34` printed, the special role of the backslash has to be disabled, which is done by writing the backslash twice: the first backslash disables the special role of the one that follows, and the string is handled as intended. __Note:__ Suppressing a special behavior like this is called __escaping__.
###Code
s = '12\\34'
print(s)
###Output
12\34
###Markdown
If a string contains many backslashes, however, this approach becomes quite cumbersome. Prefixing the string with the letter `r` (a raw string) solves the problem simply.
###Code
s = r'this\has\no\special\characters'
print(s)
###Output
this\has\no\special\characters
###Markdown
String operations: addition and integer multiplication Adding two strings concatenates them.
###Code
a = 'this is the first half '
b = 'and this is the second half'
a + b
###Output
_____no_output_____
###Markdown
Multiplying a string by an integer repeats the string that many times.
###Code
a * 2
###Output
_____no_output_____
###Markdown
String templates A __string template__ is a string declared with placeholders for values that will be filled in later. For example, the template below has three slots whose values can be chosen freely.
###Code
template = '{0:.2f} {1:s}는 {2:d} 미국달러에 해당한다.'
###Output
_____no_output_____
###Markdown
__Note:__ The numbers and symbols inside the curly braces mean the following. * `0:.2f` - the slot for the first argument of `format()`, a float displayed to two decimal places * `1:s` - the slot for the second argument of `format()`, a string * `2:d` - the slot for the third argument of `format()`, an integer A new string is produced by supplying as many values to the `format()` method as the template specifies. The arguments passed to `format()` must follow the order of the slots: the three slots of the template referenced by `template` take a float, a string, and an integer in turn, so the arguments are passed in that order, as shown below.
###Code
template.format(4.5560, '아르헨티나 페소', 1)
###Output
_____no_output_____
###Markdown
__Note:__ The part from the colon (`:`) onward indicates the type of the value to be inserted. It is optional; it only documents the value being used or produces more nicely formatted output.
###Code
template = '{0} {1}는 {2} 미국달러에 해당한다.'
template.format(4.5560, '아르헨티나 페소', 1)
###Output
_____no_output_____
###Markdown
Instead of the `format()` method, f-strings are used more often these days because they are more convenient. An f-string simply adds the letter f in front of the string, and variables are substituted directly by wrapping them in curly braces inside the string.
###Code
a = 4.5560
b = '아르헨티나 페소'
c = 1
template = f'{a:.2f} {b:s}는 {c:d} 미국달러에 해당한다.'
template
###Output
_____no_output_____
###Markdown
Here, too, the colon part may be omitted.
###Code
template = f'{a} {b}는 {c} 미국달러에 해당한다.'
template
###Output
_____no_output_____
###Markdown
Unicode and bytes __Unicode__ is a collection of character code tables that assigns a code to virtually every character, including the Latin alphabet and Hangul, and Python supports Unicode natively. __Bytes__, in contrast, are character codes converted into the form a computer actually stores. Unicode is usually encoded into bytes with __UTF-8__, while encodings specialized for Korean, such as __EUC-KR__ and __CP-949__, also exist. So if Korean text appears garbled in your browser, switch the encoding to one of these three. __Note:__ UTF-8 is the recommended default encoding these days. For example, the following declares a variable referring to the Spanish word "español", which means the Spanish language.
###Code
val = "español"
val
###Output
_____no_output_____
###Markdown
Encoded into bytes with UTF-8, the value is no longer human-readable.
###Code
val_utf8 = val.encode('utf-8')
val_utf8
###Output
_____no_output_____
###Markdown
The type of the encoded value is `bytes`.
###Code
type(val_utf8)
###Output
_____no_output_____
###Markdown
If you know the encoding, you can decode back to Unicode.
###Code
val_utf8.decode('utf-8')
###Output
_____no_output_____
###Markdown
UTF-8 is the dominant choice, but it is worth knowing that other encodings exist. Objects of type `bytes` come up frequently when working with files. Prefixing a string literal with `b` makes it a `bytes` object encoded in UTF-8.
###Code
bytes_val = b'this is bytes'
bytes_val
###Output
_____no_output_____
###Markdown
Decoding with UTF-8 yields a Unicode string (`str`).
###Code
decoded = bytes_val.decode('utf8')
decoded
type(decoded)
###Output
_____no_output_____
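###Markdown
EUC-KR and CP-949 were mentioned above without an example. As a small illustrative sketch (both codecs ship with Python's standard library), the same Korean word encodes to different byte sequences under UTF-8 and EUC-KR, so a `bytes` value can only be decoded correctly when the right codec is known.
###Code
word = '한글'
utf8_bytes = word.encode('utf-8') # three bytes per Hangul syllable
euckr_bytes = word.encode('euc-kr') # two bytes per syllable, a different sequence
utf8_bytes == euckr_bytes # False
euckr_bytes.decode('euc-kr') # round-trips back to '한글'
###Output
_____no_output_____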
###Markdown
Booleans Values that evaluate to `True` or `False` are __boolean values__. The main boolean operators are the logical conjunction `and` and the logical disjunction `or`, and both behave exactly as commonly understood.
###Code
True and True
False or True
###Output
_____no_output_____
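###Markdown
The negation operator `not` also yields a boolean, and comparison operators produce booleans as well; a brief illustration.
###Code
not True # False
3 < 5 # True, comparisons evaluate to booleans
(3 < 5) and (5 < 4) # False
###Output
_____no_output_____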
###Markdown
Type conversion The functions `str()`, `bool()`, `int()`, and `float()` convert their argument to a string, a boolean, an integer, and a float, respectively. Depending on the argument, the conversion may raise an error.
###Code
s = '3.14159'
fval = float(s)
fval
type(fval)
###Output
_____no_output_____
###Markdown
The `int()` function truncates a float, discarding everything after the decimal point, and returns an integer.
###Code
int(fval)
###Output
_____no_output_____
###Markdown
The `int()` function also converts strings directly to integers.
###Code
int('334')
###Output
_____no_output_____
###Markdown
But if the string is not in integer form, an error is raised.
###Code
int(s)
###Output
_____no_output_____
###Markdown
For numeric arguments, the `bool()` function returns `False` only for 0.
###Code
bool(fval)
bool(0)
bool(-1)
###Output
_____no_output_____
###Markdown
The `None` value `None` is the value with no meaning, the so-called null value; syntactically, it is the only value of the `NoneType` type.
###Code
type(None)
a = None
a is None
b = 5
b is not None
###Output
_____no_output_____
###Markdown
`None` is often used as the default value of a keyword parameter, for cases where an argument is only sometimes needed. For example, the third argument of the `add_and_maybe_multiply()` function below defaults to `None`, but a different value can be passed when required.
###Code
def add_and_maybe_multiply(a, b, c=None):
result = a + b
if c is not None:
result = result * c
return result
###Output
_____no_output_____
###Markdown
If the keyword argument is not supplied, the default value is used.
###Code
add_and_maybe_multiply(2, 3) # 2 + 3
###Output
_____no_output_____
###Markdown
If the keyword argument is supplied, that value is used.
###Code
add_and_maybe_multiply(2, 3, 4) # (2 + 3) * 4
###Output
_____no_output_____
###Markdown
Dates and times The `datetime` module provides useful classes for working with dates and times. The three main ones are `datetime`, `date`, and `time`, holding date-and-time, date, and time information, respectively.
###Code
from datetime import datetime, date, time
###Output
_____no_output_____
###Markdown
An object holding year-month-day-hour-minute-second information is created as follows.
###Code
dt = datetime(2021, 3, 2, 17, 5, 1)
###Output
_____no_output_____
###Markdown
A `datetime` object has attributes that expose the year, month, day, hour, minute, and second individually. For example, the day attribute is read as follows.
###Code
dt.day
###Output
_____no_output_____
###Markdown
The minute attribute is read like this.
###Code
dt.minute
###Output
_____no_output_____
###Markdown
To convert to a `date` object, which holds only the date, use the `date()` method.
###Code
dt.date()
###Output
_____no_output_____
###Markdown
To convert to a `time` object, which holds only the time of day, use the `time()` method.
###Code
dt.time()
###Output
_____no_output_____
###Markdown
To convert to an everyday date-time notation, use the `strftime()` method; the format to follow must be passed as an argument. For example, the Western style often uses the 24-hour clock in the form `weekday, month/day/year hour:minute`.
###Code
dt.strftime('%A, %m/%d/%Y %H:%M')
###Output
_____no_output_____
###Markdown
The Korean style, in contrast, often uses the 12-hour clock with an AM/PM marker, in the form `year/month/day(weekday) hour:minute`.
###Code
dt.strftime('%Y/%m/%d(%A) %I:%M%p')
###Output
_____no_output_____
###Markdown
__Note:__ More format codes are documented in the [official documentation of the datetime module](https://docs.python.org/ko/3/library/datetime.html). The `strptime()` function parses a string into a `datetime` object; the format the string follows must be passed as the second argument.
###Code
datetime.strptime('20200228', '%Y%m%d')
###Output
_____no_output_____
###Markdown
__Caution:__ When the seconds are 0, they are simply not displayed. Objects of the `datetime` class are immutable, but, much as with strings, new `datetime` objects can be created from existing values. For example, the `replace()` method builds a new `datetime` object with the year, month, day, hour, minute, or second set to different values. The example below creates a new `datetime` with both the minute and the second set to 0.
###Code
dt.replace(minute=0, second=0)
###Output
_____no_output_____
###Markdown
The difference between two `datetime` objects is computed in days and seconds and returned as an object of the `timedelta` class.
###Code
dt2 = datetime(2021, 6, 15, 23, 59)
delta = dt2 - dt
delta
type(delta)
###Output
_____no_output_____
###Markdown
Indeed, `dt + delta == dt2` evaluates to true.
###Code
dt
dt + delta == dt2
###Output
_____no_output_____
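###Markdown
A `timedelta` can also be constructed directly and added to a `datetime`. The sketch below reuses `dt` and `delta` from the cells above and imports `timedelta` from the same module.
###Code
from datetime import timedelta
one_week = timedelta(days=7)
dt + one_week # the datetime one week after dt
delta.days, delta.seconds # a timedelta exposes its days and seconds parts
###Output
_____no_output_____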
###Markdown
Control flow This part introduces the statements that control the flow of program execution: * `if` conditionals * `for` loops * `while` loops The `if` conditional If the condition after `if` is true, the code in its body block is executed.
###Code
x = -2
if x < 0:
print("It's negative")
###Output
It's negative
###Markdown
__Caution:__ Because the string "It's negative" contains a single quote, it must be wrapped in double quotes. If the condition is not satisfied, the body block is skipped.
###Code
x = 4
if x < 0:
print("It's negative")
print("if문을 건너뛰었음!")
###Output
if문을 건너뛰었음!
###Markdown
When several cases need to be handled, use as many `elif` clauses as needed, optionally followed by a single `else` clause at the end; the `else` clause may also be omitted. The conditions are tested from top to bottom, and once one is satisfied, the remaining branches are ignored.
###Code
if x < 0:
print('음수')
elif x == 0:
print('숫자 0')
elif 0 < x < 5:
print('5 보다 작은 양수')
else:
print('5 보다 큰 양수')
###Output
5 보다 작은 양수
###Markdown
__Note:__ Boolean operators short-circuit: depending on whether one expression is true or false, the other expression is either evaluated or skipped. For example, the `or` operator does not evaluate its second operand when the first is `True`. In the code below, `a < b` is true, so `c / d > 0` is never evaluated at all; if it were evaluated, it would have to raise an error, because dividing by 0 is not allowed.
###Code
a = 5; b = 7
c = 8; d = 0
if a < b or c / d > 0:
print('오른편 표현식은 검사하지 않아요!')
###Output
오른편 표현식은 검사하지 않아요!
###Markdown
Indeed, attempting to divide by 0 raises a `ZeroDivisionError`.
###Code
c / d > 0
###Output
_____no_output_____
###Markdown
The `pass` statement A statement that does nothing and simply moves on. It is mainly used to mark a part that is to be filled in later, or to handle a case that should be ignored. The code below uses `pass` so that what to do when x is 0 can be specified later.
###Code
x = 0
if x < 0:
print('negative!')
elif x == 0:
# TODO: to be specified later
pass
else:
print('positive!')
###Output
_____no_output_____
###Markdown
Ternary expressions With the ternary expression `if ... else ...`, a value chosen by a condition can be written compactly on a single line. For example, look at the code below.
###Code
x = 5
if x >= 0:
y = 'Non-negative'
else:
y = 'Negative'
print(y)
###Output
Non-negative
###Markdown
The variable `y` can be declared in a single line as follows.
###Code
y = 'Non-negative' if x >= 0 else 'Negative'
print(y)
###Output
Non-negative
###Markdown
`for` loops A `for` loop is used to iterate over the items contained in an iterable value such as a list, tuple, or string. The basic form is as follows. ```pythonfor item in collection: code block (the variable item can be used here)``` The `continue` statement When a `continue` statement is encountered while a `for` loop (or a `while` loop, introduced below) is running, the rest of the current iteration is skipped and the loop continues with the next item. For example, this makes it possible to sum the items of a list while skipping `None`.
###Code
sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
if value is None:
continue
total += value
print(total)
###Output
12
###Markdown
The `break` statement When a `break` statement is encountered while a `for` or `while` loop is running, the loop itself stops immediately and execution continues with the statement after the loop. For example, to stop summing the items of a list the moment the value 5 is encountered, use `break` as follows.
###Code
sequence = [1, 2, 0, 4, 6, 5, 2, 1]
total_until_5 = 0
for value in sequence:
if value == 5:
break
total_until_5 += value
print(total_until_5)
###Output
13
###Markdown
A `break` statement exits only the innermost enclosing `for` or `while` loop; if that loop is wrapped in another loop, the outer loop keeps running. For example, the code below prints the pairs formed from 0, 1, 2, and 3, excluding the pairs whose second element is greater than the first. __Note:__ `range(4)` behaves much like the list `[0, 1, 2, 3]`; ranges are looked at in more detail a little later.
###Code
for i in range(4):
for j in range(4):
if j > i:
break
print((i, j))
###Output
(0, 0)
(1, 0)
(1, 1)
(2, 0)
(2, 1)
(2, 2)
(3, 0)
(3, 1)
(3, 2)
(3, 3)
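###Markdown
Since `range()` is used in the nested loops above, here is a quick preview of how it behaves (ranges are discussed in more detail later); the comments show what the calls produce.
###Code
list(range(4)) # [0, 1, 2, 3]: starts at 0 and stops before 4
list(range(2, 10, 3)) # [2, 5, 8]: optional start and step arguments
###Output
_____no_output_____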
###Markdown
If the items of a list or tuple are themselves tuples or lists, the `for` loop can unpack them into several variables, as shown below.
###Code
an_iterator = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
for a, b, c in an_iterator:
print(a * (b + c))
###Output
5
44
119
###Markdown
In the code above, `a`, `b`, and `c` refer to the first, second, and third elements of each length-3 tuple, so the output shows the results of the following calculations: ```python1 * (2 + 3) = 5, 4 * (5 + 6) = 44, 7 * (8 + 9) = 119``` `while` loops A `while` loop repeats the same code as long as a given condition holds, or until a `break` statement is met during execution. The code below starts from 256 and keeps adding the repeatedly halved value; it stops as soon as the halved value is 0 or less, or the running total exceeds 500.
###Code
x = 256
total = 0
while x > 0:
if total > 500:
break
total += x
x = x // 2
print(total)
###Output
504
###Markdown
파이썬 프로그래밍 기초 1부 2편 주요 내용 * 스칼라 자료형* 제어문: `if` 조건문, `for` 반복문, `while` 반복문 스칼라 자료형 정수, 부동소수점, 부울값(`True`와 `False`), 문자열, 날짜와 시간 등처럼 단일한 값의 자료형을 __스칼라__ 자료형이라 부른다. 반면에 리스트, 튜플, 사전 등은 스칼라 자료형이 아니다. __참고:__ 문자열은 언어에 따라 스칼라 자료형이 아닐 수도 있다.예를 들어, C 언어는 문자열을 문자로 이루어진 어레이로 정의되기에스칼라 자료형이라 말하기 어렵다. 정수 자료형: `int` `int` 자료형은 임의의 숫자를 다룰 수 있다.
###Code
ival = 17239871
ival ** 6 # ival의 6승
###Output
_____no_output_____
###Markdown
부동소수점 자료형: `float` 유리수를 다룬다.
###Code
fval = 7.243
###Output
_____no_output_____
###Markdown
과학 표기법도 사용할 수 있다.
###Code
fval2 = 6.78e-5
fval2
###Output
_____no_output_____
###Markdown
나눗셈은 부동소수점으로 계산된다.
###Code
3 / 2
###Output
_____no_output_____
###Markdown
몫은 정수형으로 계산된다.
###Code
3 // 2
###Output
_____no_output_____
###Markdown
문자열 자료형: `str` 문자열은 작은따옴표(`'`) 또는 큰따옴표(`"`)를 사용한다.
###Code
a = '작은따옴표를 사용하는 문자열'
b = "큰따옴표를 사용하는 문자열"
###Output
_____no_output_____
###Markdown
여러 줄로 이루어진 문자열은 삼중 큰따옴표로 감싼다.
###Code
c = """
여러 줄에 걸친 문자열은
삼중 큰따옴표로 감싼다.
"""
###Output
_____no_output_____
###Markdown
문자열 자료형은 다양한 메서드를 제공한다.예를 들어, 문자열이 몇 줄로 이루어졌는가를 확인하려면 `count()` 메서드를 이용하여 줄바꿈 기호(`\n`)가 사용된 횟수를 세면 된다.
###Code
c.count('\n')
###Output
_____no_output_____
###Markdown
문자열은 앞서 설명한 대로 수정할 수 없는 불변(immutable) 자료형이다.
###Code
a = 'this is a string'
a[10] = 'f'
###Output
_____no_output_____
###Markdown
한 번 생성된 문자열은 수정할 수 없지만 새로운 문자열을 생성하는 데에 활용될 수는 있다.예를 들어, `replace()` 메서드는 문자열에 사용된 부분 문자열을 다른 문자열로 대체하는 방식으로 새로운 문자열을 생성한다.
###Code
b = a.replace('string', 'longer string')
b
###Output
_____no_output_____
###Markdown
변수 `a`가 가리키는 값은 변하지 않았다.
###Code
a
###Output
_____no_output_____
###Markdown
많은 파이썬 객체를 `str()` 함수를 이용하여 문자열로 변환할 수 있다.__주의사항:__ `str`은 문자열 자료형을, `str()`은 문자열을 반환하는 함수를 가리킨다.
###Code
a = 5.6
s = str(a)
type(s)
###Output
_____no_output_____
###Markdown
문자열을 리스트, 튜플 등처럼 일종의 순차 자료형, 즉, 순서가 있는 모음 자료형으로 취급할 수도 있다.따라서 인덱싱, 슬라이싱 기능 등이 모두 사용가능하다.__참고:__ 인덱싱, 슬라이싱 등에 대해서 앞으로 많이 배울 것이다.
###Code
s = 'python'
s[:3]
###Output
_____no_output_____
###Markdown
역슬래시(\) 기능 __주의사항:__ 윈도우 브라우저에서는 역슬래시가 원화 통화기호(&8361;) 모양으로 보이지만,동일하게 기능한다.역슬래시 문자(\)는 특수한 기능을 수행한다.예를 들어, 줄바꿈을 의미하는 문자 `\n`, 탭을 의미하는 문자 `\t` 등에서 역슬래시가 특수한 기능을 수행한다. 따라서 역슬래시 자체를 문자열에 포함할 때 조심해야 한다.예를 들어, 아래와 같은 문자열을 사용하고자 한다고 가정하자.```python"12\34"```그런데 그냥 아래와 같이 지정하면 다르게 작동한다.이유는 `\3`이 특수한 문자를 표현하도록 역슬래시가 기능하기 때문이다.(아래 유니코드 설명 참조)
###Code
s = '12\34'
print(s)
s = '\3'
print(s)
###Output
###Markdown
따라서 `12\34`로 출력되게 하려면 역슬래시의 기능을 해제해야 하며,그러기 위해 역슬래시를 두 번 적어주면 된다.그러면 첫째 역슬래시 뒤에 나오는 역슬래시의 기능을 해제해서 지정한 문자열로 처리된다.__참고:__ 이처럼 특정 기능을 작동하지 못하도록 하는 것을 영어로 __이스케이프__(escape)라고 부른다.
###Code
s = '12\\34'
print(s)
###Output
12\34
###Markdown
그런데 문자열 안에 많은 역슬래시가 포함되어 있다면 이런 방식은 매우 불편하다. 하지만 문자열 앞에 영어 알파벳 `r`을 추가하면 간단하게 해결된다.
###Code
s = r'this\has\no\special\characters'
print(s)
###Output
this\has\no\special\characters
###Markdown
문자열 연산: 덧셈과 정수 곱셈 두 문자열을 더하면 두 문자열을 이어서 붙인다.
###Code
a = 'this is the first half '
b = 'and this is the second half'
a + b
###Output
_____no_output_____
###Markdown
문자열과 정수를 곱하면 해당 정수만큼 복사되어 길어진다.
###Code
a * 2
###Output
_____no_output_____
###Markdown
문자열 템플릿 __문자열 템플릿__은 문자열 안에 일부 변하는 값을 지정할 수 있도록 선언된 문자열이다.예를 들어, 아래 템플릿은 세 개의 값을 임의로 지정할 수 있도록 준비되어 있다.
###Code
template = '{0:.2f} {1:s}는 {2:d} 미국달러에 해당한다.'
###Output
_____no_output_____
###Markdown
__참고:__ 중괄호 안에 사용된 숫자와 기호의 의미는 다음과 같다.* `0:.2f` - `format()` 메서드의 첫째 인자인 부동소수점이 자리하며 소수점 이하 두 자리까지 표기* `1:s` - `format()` 메서드의 둘째 인자인 문자열이 자리하는 자리* `2:d` - `format()` 메서드의 셋째 인자인 정수가 위치하는 자리 문자열 템플릿에 지정된 수 만큼의 값을 `format()` 메서드를 이용하여 입력하여새로운 문자열을 생성할 수 있다.단, `format()` 메서드에 사용되는 인자의 순서는 지정된 순서대로 정해져야 한다.예를 들어, `template` 변수가 가리키는 문자열 템플릿의 세 위치에 차례대로부동소수점, 문자열, 정수를 입력해야 하며, 아래와 같이 차례대로 인자로 사용하면 된다.
###Code
template.format(4.5560, '아르헨티나 페소', 1)
###Output
_____no_output_____
###Markdown
__참고:__콜론(`:`)을 포함하여 그 이후에 해당하는 부분은 입력될 값들의 자료형을 안내하는 역할을 수행한다.하지만 의무사항은 아니며 다만, 사용되는 값에 대한 정보를 제공하거나아니면 보다 서식을 갖춘 문자열이 출력하도록 사용된다.
###Code
template = '{0} {1}는 {2} 미국달러에 해당한다.'
template.format(4.5560, '아르헨티나 페소', 1)
###Output
_____no_output_____
###Markdown
`format()` 메서드를 사용하는 대신에 요즘에는 f-문자열을 보다 많이 사용한다.이유는 보다 편리한 사용성 때문이다.f-문자열은 문자열 앞에 영어 알파벳 f를 추가하기만 하면 되며 문자열 안에 변수를 중괄호로 감싸 직접 대입한다.
###Code
a = 4.5560
b = '아르헨티나 페소'
c = 1
template = f'{a:.2f} {b:s}는 {c:d} 미국달러에 해당한다.'
template
###Output
_____no_output_____
###Markdown
여기서도 콜론 부분은 생략해도 된다.
###Code
template = f'{a} {b}는 {c} 미국달러에 해당한다.'
template
###Output
_____no_output_____
###Markdown
Unicode and bytes __Unicode__ is a collection of code tables for representing characters; it covers virtually every character, including the Latin alphabet and Hangul, and Python supports it natively.__Bytes__, on the other hand, are character codes converted into a form the computer works with. Encoding Unicode to bytes is generally done with __UTF-8__. There are also Korean-specific encodings such as __EUC-KR__ and __CP-949__, so if Hangul looks broken in your browser, switch the browser encoding to one of these. __Note:__ Nowadays UTF-8 is the recommended default encoding. For example, the following declares a variable holding the Spanish word for the Spanish language, "español".
###Code
val = "español"
val
###Output
_____no_output_____
###Markdown
Encoded to bytes with UTF-8, the value is no longer human readable.
###Code
val_utf8 = val.encode('utf-8')
val_utf8
###Output
_____no_output_____
###Markdown
The type of the encoded value is `bytes`.
###Code
type(val_utf8)
###Output
_____no_output_____
###Markdown
If you know the encoding, you can decode the value back to Unicode.
###Code
val_utf8.decode('utf-8')
###Output
_____no_output_____
###Markdown
UTF-8 is dominant, but it is worth knowing that other encodings exist. Objects of type `bytes` are encountered frequently when working with files. Prefixing a string literal with `b` makes it a `bytes` object encoded with UTF-8.
###Code
bytes_val = b'this is bytes'
bytes_val
###Output
_____no_output_____
###Markdown
Decoding with UTF-8 yields a Unicode string (`str`).
###Code
decoded = bytes_val.decode('utf8')
decoded
type(decoded)
###Output
_____no_output_____
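###Markdown
As a small additional sketch of the Korean-oriented encodings mentioned above, the same Hangul text can be encoded with EUC-KR instead of UTF-8; the byte values differ, and only the matching codec decodes them back (the variable names here are just illustrative).
###Code
korean = '한글'
kr_bytes = korean.encode('euc-kr')   # EUC-KR byte representation, different from the UTF-8 bytes
kr_bytes.decode('euc-kr')            # decoding with the matching codec recovers the text
###Output
_____no_output_____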
###Markdown
Booleans A value that evaluates to `True` or `False` is a __boolean value__. The main operators for booleans are logical AND (`and`) and logical OR (`or`), and both behave as commonly understood.
###Code
True and True
False or True
###Output
_____no_output_____
###Markdown
Type conversion The functions `str()`, `bool()`, `int()`, and `float()` convert their argument to a string, a boolean, an integer, or a float, respectively. Depending on the argument, an error may be raised.
###Code
s = '3.14159'
fval = float(s)
fval
type(fval)
###Output
_____no_output_____
###Markdown
The `int()` function truncates a float, discarding everything after the decimal point, and returns an integer.
###Code
int(fval)
###Output
_____no_output_____
###Markdown
The `int()` function can also convert a string directly to an integer.
###Code
int('334')
###Output
_____no_output_____
###Markdown
But if the string is not in integer form, an error is raised.
###Code
int(s)
###Output
_____no_output_____
###Markdown
The `bool()` function returns `False` only for 0.
###Code
bool(fval)
bool(0)
bool(-1)
###Output
_____no_output_____
###Markdown
The `None` value `None` is the value that carries no meaning, the so-called null value, and it is the only value of the `NoneType` type.
###Code
type(None)
a = None
a is None
b = 5
b is not None
###Output
_____no_output_____
###Markdown
`None` is often used as the default value of a keyword parameter when an extra argument is only sometimes needed. For example, the third argument of the `add_and_maybe_multiply()` function below defaults to `None`, but a different value can be passed when required.
###Code
def add_and_maybe_multiply(a, b, c=None):
result = a + b
if c is not None:
result = result * c
return result
###Output
_____no_output_____
###Markdown
If the keyword argument is not supplied, the default value is used.
###Code
add_and_maybe_multiply(2, 3) # 2 + 3
###Output
_____no_output_____
###Markdown
If the keyword argument is supplied, that value is used.
###Code
add_and_maybe_multiply(2, 3, 4) # (2 + 3) * 4
###Output
_____no_output_____
###Markdown
Dates and times The `datetime` module provides useful classes for dates and times, most notably `datetime`, `date`, and `time`, which hold date-and-time, date, and time information respectively.
###Code
from datetime import datetime, date, time
###Output
_____no_output_____
###Markdown
An object holding year-month-day-hour-minute-second information is created as follows.
###Code
dt = datetime(2021, 3, 2, 17, 5, 1)
###Output
_____no_output_____
###Markdown
A `datetime` object has attributes that expose the year, month, day, hour, minute, and second individually. For example, the day attribute is read as follows.
###Code
dt.day
###Output
_____no_output_____
###Markdown
The minute attribute is read as follows.
###Code
dt.minute
###Output
_____no_output_____
###Markdown
Use the `date()` method to convert to a `date` object that holds only the date information.
###Code
dt.date()
###Output
_____no_output_____
###Markdown
Use the `time()` method to convert to a `time` object that holds only the time information.
###Code
dt.time()
###Output
_____no_output_____
###Markdown
To convert to an everyday date-time notation, use the `strftime()` method and pass the format to follow as its argument. For example, the Western style commonly uses the 24-hour clock and shows `weekday, month-day-year hour:minute`.
###Code
dt.strftime('%A, %m/%d/%Y %H:%M')
###Output
_____no_output_____
###Markdown
The Korean style, in contrast, uses the 12-hour clock with an AM/PM marker and commonly shows `year-month-day(weekday) hour:minute`.
###Code
dt.strftime('%Y/%m/%d(%A) %I:%M%p')
###Output
_____no_output_____
###Markdown
__Note:__ More format codes are documented in the [official documentation of the datetime module](https://docs.python.org/ko/3/library/datetime.html). The `strptime()` function parses a string into a `datetime` object; the format that the string follows must be passed as the second argument.
###Code
datetime.strptime('20200228', '%Y%m%d')
###Output
_____no_output_____
###Markdown
__Note:__ When the seconds are 0 they are simply not shown. Objects of the `datetime` class are immutable, but, much as with strings, a new `datetime` object can be created from an existing one. For example, the `replace()` method creates a new `datetime` in which some of the year, month, day, hour, minute, or second fields are replaced with other values. The example below creates a new `datetime` object with both minute and second set to 0.
###Code
dt.replace(minute=0, second=0)
###Output
_____no_output_____
###Markdown
The difference of two `datetime` objects is computed in days and seconds and returned as an object of the `timedelta` class.
###Code
dt2 = datetime(2021, 6, 15, 23, 59)
delta = dt2 - dt
delta
type(delta)
###Output
_____no_output_____
###Markdown
Indeed, `dt + delta == dt2` evaluates to true.
###Code
dt
dt + delta == dt2
###Output
_____no_output_____
###Markdown
Control flow This section introduces the statements that control the flow of program execution:* the `if` conditional* the `for` loop* the `while` loop The `if` conditional If the condition that follows `if` is true, the code in the corresponding block is executed.
###Code
x = -2
if x < 0:
print("It's negative")
###Output
It's negative
###Markdown
__Note:__ Because the string "It's negative" contains a single quote, it must be wrapped in double quotes. If the condition is not satisfied, the block is skipped.
###Code
x = 4
if x < 0:
print("It's negative")
print("if문을 건너뛰었음!")
###Output
if문을 건너뛰었음!
###Markdown
When several cases must be handled, use as many `elif` clauses as needed, optionally followed by a single `else` clause (which may also be omitted). The conditions are tested from the top; as soon as one is satisfied, the remaining ones are ignored.
###Code
if x < 0:
print('음수')
elif x == 0:
print('숫자 0')
elif 0 < x < 5:
print('5 보다 작은 양수')
else:
print('5 보다 큰 양수')
###Output
5 보다 작은 양수
###Markdown
__Note:__ With boolean operators as well, whether one expression is true or false determines whether the other expression is evaluated at all. For example, the `or` operator does not evaluate its right-hand expression when the first expression is `True`. In the code below, `c / d > 0` is never evaluated because `a < b` is already `True`. If `c / d > 0` were evaluated, an error would have to occur, because division by zero is not allowed.
###Code
a = 5; b = 7
c = 8; d = 0
if a < b or c / d > 0:
print('오른편 표현식은 검사하지 않아요!')
###Output
오른편 표현식은 검사하지 않아요!
###Markdown
Indeed, attempting to divide by zero raises a `ZeroDivisionError`.
###Code
c / d > 0
###Output
_____no_output_____
###Markdown
The `pass` statement A statement that does nothing and simply moves on. It is mainly used to mark a part that will be filled in later, or to handle a case that should be ignored. The code below uses a `pass` statement so that what to do when x is 0 can be specified later.
###Code
x = 0
if x < 0:
print('negative!')
elif x == 0:
    # TODO: to be specified later
pass
else:
print('positive!')
###Output
_____no_output_____
###Markdown
Ternary expressions The ternary expression `if ... else ...` lets a value be chosen concisely on a single line. Consider the code below, for example.
###Code
x = 5
if x >= 0:
y = 'Non-negative'
else:
y = 'Negative'
print(y)
###Output
Non-negative
###Markdown
The variable `y` can be declared in a single line as follows.
###Code
y = 'Non-negative' if x >= 0 else 'Negative'
print(y)
###Output
Non-negative
###Markdown
The `for` loop The `for` loop is used to iterate over the items contained in an iterable such as a list, a tuple, or a string. Its basic form is as follows. ```pythonfor item in collection: code block (the variable item can be used here)``` The `continue` statement When a `for` loop (or the `while` loop introduced below) encounters a `continue` statement, the rest of the current iteration is skipped and the loop moves on to the next item. For example, this makes it easy to compute the sum of the items of a list while skipping the `None` values.
###Code
sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
if value is None:
continue
total += value
print(total)
###Output
12
###Markdown
The `break` statement When a `for` loop (or the `while` loop introduced below) encounters a `break` statement, the loop itself stops and execution continues with the next statement. For example, to stop summing the items of a list as soon as a 5 is encountered, use a `break` statement as follows.
###Code
sequence = [1, 2, 0, 4, 6, 5, 2, 1]
total_until_5 = 0
for value in sequence:
if value == 5:
break
total_until_5 += value
print(total_until_5)
###Output
13
###Markdown
A `break` statement exits only the innermost enclosing `for` or `while` loop; if that loop is wrapped in another loop, the outer loop continues running. For example, the code below prints pairs built from 0, 1, 2, and 3, excluding the pairs whose second item is larger than the first.__Note:__ `range(4)` behaves much like the list `[0, 1, 2, 3]`. Ranges are covered in more detail shortly.
###Code
for i in range(4):
for j in range(4):
if j > i:
break
print((i, j))
###Output
(0, 0)
(1, 0)
(1, 1)
(2, 0)
(2, 1)
(2, 2)
(3, 0)
(3, 1)
(3, 2)
(3, 3)
###Markdown
If the items of a list or tuple are themselves tuples or lists, a `for` loop can be run with several loop variables at once, as shown below.
###Code
an_iterator = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
for a, b, c in an_iterator:
print(a * (b + c))
###Output
5
44
119
###Markdown
In the code above, `a`, `b`, and `c` refer to the first, second, and third items of each length-3 tuple, so the output shows the results of the computation below.```python1 * (2 + 3) = 54 * (5 + 6) = 447 * (8 + 9) = 119``` The `while` loop A `while` loop repeats the same code as long as the given condition holds, or until a `break` statement is encountered during execution. The code below starts from 256 and keeps adding the repeatedly halved value; it stops as soon as the halved value becomes 0 or less, or the running total exceeds 500.
###Code
x = 256
total = 0
while x > 0:
if total > 500:
break
total += x
x = x // 2
print(total)
###Output
504
|
module-2/Probability-Intro/your-code/main.ipynb | ###Markdown
Introduction To Probability Challenge 1A and B are events of a probability space with $(\omega, \sigma, P)$ such that $P(A) = 0.3$, $P(B) = 0.6$ and $P(A \cap B) = 0.1$Which of the following statements are false?* $P(A \cup B) = 0.6$* $P(A \cap B^{C}) = 0.2$* $P(A \cap (B \cup B^{C})) = 0.4$* $P(A^{C} \cap B^{C}) = 0.3$* $P((A \cap B)^{C}) = 0.9$
###Code
"""
your solution here:
which are false?-
𝑃(𝐴∪𝐵)=0.6
0.8 != 0.6 FALSE
𝑃(𝐴∩𝐵𝐶)=0.2
0.2 = 0.2 TRUE!
𝑃(𝐴∩(𝐵∪𝐵𝐶))=0.4
0.3 != 0.4 FALSE
𝑃(𝐴𝐶∩𝐵𝐶)=0.3
0.2 != 0.3 FALSE
𝑃((𝐴∩𝐵)𝐶)=0.9
1 - 0.1
0.9 = 0.9 TRUE!
"""
###Output
_____no_output_____
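###Markdown
A quick numeric check of the reasoning above, using the basic identities P(AuB) = P(A) + P(B) - P(AnB) and P(Xc) = 1 - P(X) (just a verification sketch; the variable names are illustrative):
###Code
P_A, P_B, P_AB = 0.3, 0.6, 0.1
print(P_A + P_B - P_AB)        # P(A u B) = 0.8, so statement 1 is false
print(P_A - P_AB)              # P(A n Bc) = 0.2, so statement 2 is true
print(P_A)                     # P(A n (B u Bc)) = P(A) = 0.3, so statement 3 is false
print(1 - (P_A + P_B - P_AB))  # P(Ac n Bc) = 0.2, so statement 4 is false
print(1 - P_AB)                # P((A n B)c) = 0.9, so statement 5 is true
###Output
_____no_output_____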
###Markdown
Challenge 2There is a box with 10 white balls, 12 red balls and 8 black balls. Calculate the probability of:* Taking a white ball out.* Taking a white ball out after taking a black ball out.* Taking a red ball out after taking a black and a red ball out.* Taking a red ball out after taking a black and a red ball out with reposition.**Hint**: Reposition means putting back the ball into the box after taking it out.
###Code
"""
your solution here
Taking a white ball out.
10/30 = 1/3
Taking a white ball out after taking a black ball out.
10/29
Taking a red ball out after taking a black and a red ball out.
11/28 (10 white, 11 red and 7 black balls remain)
Taking a red ball out after taking a black and a red ball out with reposition.
12/30 = 2/5
"""
###Output
_____no_output_____
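###Markdown
The same conditional probabilities can be checked with the `fractions` module from the standard library (a small verification sketch):
###Code
from fractions import Fraction
print(Fraction(10, 30))   # white: 1/3
print(Fraction(10, 29))   # white after one black ball was removed
print(Fraction(11, 28))   # red after one black and one red ball were removed
print(Fraction(12, 30))   # red with reposition, the box is unchanged: 2/5
###Output
_____no_output_____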
###Markdown
Challenge 3You are planning to go on a picnic today but the morning is cloudy. You hate rain so you don't know whether to go out or stay home! To help you make a decision, you gather the following data about rainy days:* 50% of all rainy days start off cloudy!* Cloudy mornings are common. About 40% of days start cloudy. * This month is usually dry so only 3 of 30 days (10%) tend to be rainy. What is the chance of rain during the day?
###Code
"""
your solution here:
C | R = 0.5
R = 0.1
C = 0.4
P(R|C) = P(0.5)*(0.1)
-------------- = 0.125 = ~ 12.5% CHANCE OF ACTUAL RAIN
P(0.4) GIVEN THAT IT'S A CLOUDY MORNING
NOT TOO BAD, HAVE A PICNIC.
"""
###Output
_____no_output_____
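###Markdown
The same Bayes computation written out in code (a small verification sketch; the variable names are illustrative):
###Code
p_cloudy_given_rain = 0.5
p_rain = 0.1
p_cloudy = 0.4
p_rain_given_cloudy = p_cloudy_given_rain * p_rain / p_cloudy
print(p_rain_given_cloudy)   # 0.125, i.e. a 12.5% chance of rain
###Output
_____no_output_____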
###Markdown
Challenge 4One thousand people were asked through a telephone survey whether they thought more street lighting is needed at night or not.Out of the 480 men that answered the survey, 324 said yes and 156 said no. On the other hand, out of the 520 women that answered, 351 said yes and 169 said no. We wonder if men and women have a different opinions about the street lighting matter. Is gender relevant or irrelevant to the question?Consider the following events:- The answer is yes, so the person that answered thinks that more street lighting is needed.- The person who answered is a man.We want to know if these events are independent, that is, if the fact of wanting more light depends on whether one is male or female. Are these events independent or not?**Hint**: To clearly compare the answers by gender, it is best to place the data in a table.
###Code
# your code here
import pandas as pd
d = {'Category': ['Men', 'Women', 'Total'],'Total': [480, 520, 1000], 'Yes': [324, 351, 675], 'No': [156, 169, 325]}
df = pd.DataFrame(data=d)
df
"""
your solution here
INDEPENDENT EVENTS:
P(A ∩ B) = P(A) * P(B)
P(A) = P(M_y)
P(B) = P(W_y)
P(M_y) = 324/480 = 0.675 SO YES THEY ARE INDEPENDENT EVENTS,
REGARDLESS OF FEMALE OR MALE SAYING YES THEY WANT MORE LIGHTS AT NIGHT
P(W_y) = 351/520 = 0.675
P(YES) = 324 + 351
--------- = 0.675
1000
"""
###Output
_____no_output_____
###Markdown
Introduction To Probability Challenge 1A and B are events of a probability space with $(\omega, \sigma, P)$ such that $P(A) = 0.3$, $P(B) = 0.6$ and $P(A \cap B) = 0.1$Which of the following statements are false?* $P(A \cup B) = 0.6$* $P(A \cap B^{C}) = 0.2$* $P(A \cap (B \cup B^{C})) = 0.4$* $P(A^{C} \cap B^{C}) = 0.3$* $P((A \cap B)^{C}) = 0.9$
###Code
"""
your solution here
"""
###Output
_____no_output_____
###Markdown
Challenge 2There is a box with 10 white balls, 12 red balls and 8 black balls. Calculate the probability of:* Taking a white ball out.* Taking a white ball out after taking a black ball out.* Taking a red ball out after taking a black and a red ball out.* Taking a red ball out after taking a black and a red ball out with reposition.**Hint**: Reposition means putting back the ball into the box after taking it out.
###Code
"""
your solution here
"""
###Output
_____no_output_____
###Markdown
Challenge 3You are planning to go on a picnic today but the morning is cloudy. You hate rain so you don't know whether to go out or stay home! To help you make a decision, you gather the following data about rainy days:* 50% of all rainy days start off cloudy!* Cloudy mornings are common. About 40% of days start cloudy. * This month is usually dry so only 3 of 30 days (10%) tend to be rainy. What is the chance of rain during the day?
###Code
"""
your solution here
"""
###Output
_____no_output_____
###Markdown
Challenge 4One thousand people were asked through a telephone survey whether they thought more street lighting is needed at night or not.Out of the 480 men that answered the survey, 324 said yes and 156 said no. On the other hand, out of the 520 women that answered, 351 said yes and 169 said no. We wonder if men and women have a different opinions about the street lighting matter. Is gender relevant or irrelevant to the question?Consider the following events:- The answer is yes, so the person that answered thinks that more street lighting is needed.- The person who answered is a man.We want to know if these events are independent, that is, if the fact of wanting more light depends on whether one is male or female. Are these events independent or not?**Hint**: To clearly compare the answers by gender, it is best to place the data in a table.
###Code
# your code here
"""
your solution here
"""
###Output
_____no_output_____
###Markdown
Introduction To Probability Challenge 1A and B are events of a probability space with $(\omega, \sigma, P)$ such that $P(A) = 0.3$, $P(B) = 0.6$ and $P(A \cap B) = 0.1$Which of the following statements are false?* $P(A \cup B) = 0.6$* $P(A \cap B^{C}) = 0.2$* $P(A \cap (B \cup B^{C})) = 0.4$* $P(A^{C} \cap B^{C}) = 0.3$* $P((A \cap B)^{C}) = 0.9$
###Code
"""
The first one is false because of 𝑃(𝐴∪𝐵) = 0.3 + 0.6 - 0.1 = 0.8
The second one is true because P(B complement) is going to be 0.2 + 0.2 = 0.4 and its intersection with A = 0.2
The third one is false because the first union is going to be the whole system (1) and its intersection with A is going to be A = 0.3
The fourth one is false because the intersection of everything but A (U and B which is not A) and everything but B (U and A which is not B) is 0.2
The fifth one is true because A intersected with B is 0.1 and all the system except this is going to be 0.9
"""
###Output
_____no_output_____
###Markdown
Challenge 2There is a box with 10 white balls, 12 red balls and 8 black balls. Calculate the probability of:* Taking a white ball out.* Taking a white ball out after taking a black ball out.* Taking a red ball out after taking a black and a red ball out.* Taking a red ball out after taking a black and a red ball out with reposition.**Hint**: Reposition means putting back the ball into the box after taking it out.
###Code
"""
The probability of taking a white ball out is going to be 10/30 = 1/3 = 0.3333 = 33.33%
The probability of taking a black ball out and then a white ball out is 8/30 * 10/29 = 8/87 = 0.091954 = 9.20%
The probability of taking a black ball, a red ball and a red ball again is 8/30 * 12/29 * 11/28 = 0.0433498 = 4.33%
The probability of taking a black ball, a red ball and red ball with reposition is 8/30 * 12/30 * 12/30 = 0.0426667 = 4.27%
"""
###Output
_____no_output_____
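###Markdown
These products can be verified quickly in code (verification sketch only):
###Code
print(10/30)                  # white: 0.3333...
print(8/30 * 10/29)           # black then white: about 0.0920
print(8/30 * 12/29 * 11/28)   # black, red, then red again: about 0.0433
print(8/30 * 12/30 * 12/30)   # black, red, red with reposition: about 0.0427
###Output
_____no_output_____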
###Markdown
Challenge 3You are planning to go on a picnic today but the morning is cloudy. You hate rain so you don't know whether to go out or stay home! To help you make a decision, you gather the following data about rainy days:* 50% of all rainy days start off cloudy!* Cloudy mornings are common. About 40% of days start cloudy. * This month is usually dry so only 3 of 30 days (10%) tend to be rainy. What is the chance of rain during the day?
###Code
"""
A = R = 10% = 1/10
B = C = 40% = 2/5
P(R|C) = 50% = 1/2
P(C|R) = 1/2 * 1/10
---------- = 1/8 = 0.125 = 12.5%
2/5
"""
###Output
_____no_output_____
###Markdown
Challenge 4One thousand people were asked through a telephone survey whether they thought more street lighting is needed at night or not.Out of the 480 men that answered the survey, 324 said yes and 156 said no. On the other hand, out of the 520 women that answered, 351 said yes and 169 said no. We wonder if men and women have a different opinions about the street lighting matter. Is gender relevant or irrelevant to the question?Consider the following events:- The answer is yes, so the person that answered thinks that more street lighting is needed.- The person who answered is a man.We want to know if these events are independent, that is, if the fact of wanting more light depends on whether one is male or female. Are these events independent or not?**Hint**: To clearly compare the answers by gender, it is best to place the data in a table.
###Code
"""
Yes No Total
Men 324 156 480
Women 351 169 520
"""
#probability of getting a yes
print('P(Yes) = (324 + 351)/1000 = ', (324 + 351)/1000)
print('P(man Yes) = 324/480 = ', 324/480)
print('P(woman Yes) = 351/520 = ', 351/520)
print('The probabilities are the same, so it does not really matter whether if the subject is male or female.')
###Output
P(Yes) = (324 + 351)/1000 = 0.675
P(man Yes) = 324/480 = 0.675
P(woman Yes) = 351/520 = 0.675
The probabilities are the same, so it does not really matter whether if the subject is male or female.
|
chrfarri Final Project/Optimizer Testing.ipynb | ###Markdown
Optimizer Testing Load Packages
###Code
import numpy as np
import pandas as pd
import keras
from keras.utils import plot_model
from keras import applications
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Bidirectional, Conv2D, MaxPooling2D, GlobalAveragePooling2D, Lambda, MaxPool2D, BatchNormalization, Input, concatenate, Reshape, LSTM, CuDNNLSTM
#from keras.layers import K
from keras.utils import np_utils
from keras.utils.np_utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import RMSprop, Adam
from keras.callbacks import Callback, EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, TensorBoard
from keras.utils.np_utils import to_categorical
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import xml.etree.ElementTree as ET
import sklearn
import itertools
import cv2
import scipy
import os
import csv
from PIL import Image
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm import tqdm
# save np.load
np_load_old = np.load
# modify the default parameters of np.load
np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
class1 = {1:'NEUTROPHIL',2:'EOSINOPHIL',3:'MONOCYTE',4:'LYMPHOCYTE'}
class2 = {0:'Mononuclear',1:'Polynuclear'}
tree_path = 'C:\\Users\\Chris\\Blood Cells\\BCCD_Dataset-master\\BCCD\\Annotations'
image_path = 'C:\\Users\\Chris\\Blood Cells\\BCCD_Dataset-master\\BCCD\\JPEGImages'
###Output
_____no_output_____
###Markdown
Define Helper Functions and Load Data
###Code
def get_data(folder, size):
"""
Load the data and labels from the given folder.
"""
X = []
y = []
z = []
for wbc_type in os.listdir(folder):
if not wbc_type.startswith('.'):
if wbc_type in ['NEUTROPHIL']:
label = 1
label2 = 1
elif wbc_type in ['EOSINOPHIL']:
label = 2
label2 = 1
elif wbc_type in ['MONOCYTE']:
label = 3
label2 = 0
elif wbc_type in ['LYMPHOCYTE']:
label = 4
label2 = 0
else:
label = 5
label2 = 0
for image_filename in tqdm(os.listdir(folder + wbc_type)):
img_file = cv2.imread(folder + wbc_type + '/' + image_filename)
if img_file is not None:
img_file = cv2.resize(img_file, dsize=size)
img_arr = np.asarray(img_file)
X.append(img_arr)
y.append(label)
z.append(label2)
X = np.asarray(X)
y = np.asarray(y)
z = np.asarray(z)
return X,y,z
def plot_learning_curve(history):
plt.figure(figsize=(8,8))
plt.subplot(1,2,1)
plt.plot(history['acc'])
plt.plot(history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig('./accuracy_curve.png')
#plt.clf()
# summarize history for loss
plt.subplot(1,2,2)
plt.plot(history['loss'])
plt.plot(history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig('./loss_curve.png')
size=(64, 64)
X_train, y_train, z_train = get_data('C:\\Users\\Chris\\Blood Cells\\BCCD_Dataset-master\\dataset2-master\\images\\TRAIN\\', size)
X_test, y_test, z_test = get_data('C:\\Users\\Chris\\Blood Cells\\BCCD_Dataset-master\\dataset2-master\\images\\TEST\\', size)
# Encode labels to hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
y_trainHot = to_categorical(y_train, num_classes = 5)
y_testHot = to_categorical(y_test, num_classes = 5)
z_trainHot = to_categorical(z_train, num_classes = 2)
z_testHot = to_categorical(z_test, num_classes = 2)
# Normalize to [0,1]
X_train=np.array(X_train)
X_train=X_train/255.0
X_test=np.array(X_test)
X_test=X_test/255.0
def Build_Model(OPT):
model = Sequential() #Keras sequential builder
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
#Compile
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=OPT, metrics=['accuracy'])
return model
#Training Parameters
batch_size = 128
epochs = 50
#Input Data parameters
num_classes = 5
img_rows, img_cols = 64, 64
input_shape = (img_rows, img_cols, 3)
###Output
_____no_output_____
###Markdown
SGD
###Code
LR = 0.01
#Build Model
model = Build_Model(keras.optimizers.SGD(lr=LR))
model.summary()
datagen = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=True) # randomly flip images
history = model.fit_generator(datagen.flow(X_train,y_trainHot, batch_size=batch_size),
steps_per_epoch=len(X_train) / batch_size, epochs=epochs, validation_data = [X_test, y_testHot], verbose=2)
plot_learning_curve(history.history)
plt.show()
###Output
_____no_output_____
###Markdown
RMSprop
###Code
LR = 0.001
#Build Model
model = Build_Model(RMSprop(lr=LR))
datagen = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=True) # randomly flip images
history = model.fit_generator(datagen.flow(X_train,y_trainHot, batch_size=batch_size),
steps_per_epoch=len(X_train) / batch_size, epochs=epochs, validation_data = [X_test, y_testHot], verbose=2)
plot_learning_curve(history.history)
plt.show()
###Output
_____no_output_____
###Markdown
Adam
###Code
LR = 0.001
#Build Model
model = Build_Model(Adam(lr=LR))
datagen = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=True) # randomly flip images
history = model.fit_generator(datagen.flow(X_train,y_trainHot, batch_size=batch_size),
steps_per_epoch=len(X_train) / batch_size, epochs=epochs, validation_data = [X_test, y_testHot], verbose=2)
plot_learning_curve(history.history)
plt.show()
LR = 0.002
#Build Model
model = Build_Model(Adam(lr=LR))
datagen = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=True) # randomly flip images
history = model.fit_generator(datagen.flow(X_train,y_trainHot, batch_size=batch_size),
steps_per_epoch=len(X_train) / batch_size, epochs=epochs, validation_data = [X_test, y_testHot], verbose=2)
plot_learning_curve(history.history)
plt.show()
LR = 0.0005
#Build Model
model = Build_Model(Adam(lr=LR))
datagen = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=True) # randomly flip images
history = model.fit_generator(datagen.flow(X_train,y_trainHot, batch_size=batch_size),
steps_per_epoch=len(X_train) / batch_size, epochs=epochs, validation_data = [X_test, y_testHot], verbose=2)
plot_learning_curve(history.history)
plt.show()
LR = 0.003
#Build Model
model = Build_Model(Adam(lr=LR))
datagen = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=True) # randomly flip images
history = model.fit_generator(datagen.flow(X_train,y_trainHot, batch_size=batch_size),
steps_per_epoch=len(X_train) / batch_size, epochs=epochs, validation_data = [X_test, y_testHot], verbose=2)
plot_learning_curve(history.history)
plt.show()
###Output
_____no_output_____ |
injest_spending_data/Ingest spending data.ipynb | ###Markdown
Ingest spending data PDF version of State Infrastructure Planhttps://www.dsdmip.qld.gov.au/infrastructure/infrastructure-planning-and-policy/state-infrastructure-plan.htmlTabular data as found by CCGhttps://www.data.qld.gov.au/dataset/queensland-government-capital-works-building-projects/resource/410fb21f-8c5a-43a1-8b57-a74a3329d1d0
###Code
version
library(pdftools)
library(tibble)
pdf_file <- "https://github.com/ropensci/tabulizer/raw/master/inst/examples/data.pdf"
txt <- pdf_text(pdf_file)
cat(txt[1])
pdf_file <- "https://www.dsdmip.qld.gov.au/resources/plan/capital-program-sept2020.pdf"
SIP_pdf <- suppressMessages(pdf_data(pdf_file))
print(SIP_pdf, n=50)
num_pages <- length(lengths(SIP_pdf))
df <- data.frame()
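# Rebuild each PDF page line by line: words that share a y coordinate are joined into one line of text,
# and the first token on that line that parses as a number is stored alongside the line.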
for (page in 1:num_pages){
SIP_pdf_page <- SIP_pdf[[page]]
line_ys <- unique(SIP_pdf_page$y)
words <- vector()
first_number <- vector()
for (i in 1:length(line_ys)) {
line_vect <- as.vector(SIP_pdf_page$text[SIP_pdf_page$y==line_ys[i]])
position_of_last_word = -1
position_of_first_number = -1
for (j in 1:length(line_vect)) {
word = line_vect[j]
if (length(word) > 0) {
if (!is.na(word)) {
if (sum(substr(word, 1, 1)==c(letters, LETTERS)) > 0) {
position_of_last_word = j
} else if ( !is.na(suppressWarnings(as.numeric(word))) & (position_of_first_number == -1) ) {
position_of_first_number = j
}
}
}
}
if (position_of_last_word != -1) {
words[i] = paste(line_vect[1:position_of_last_word], collapse=" ")
} else {
words[i] = ""
}
if (position_of_first_number != -1) {
first_number[i] <- line_vect[position_of_first_number]
} else {
first_number[i] = NA
}
}
if (length(words) == 1 & words[1]=="") {
words <- vector()
first_number <- vector()
}
this_df <- data.frame(page = rep(page, i),y=line_ys, words, first_number)
df <- rbind(df, this_df)
}
df
###Output
_____no_output_____ |
Assigment1/Python_Mandatory_Assignment.ipynb | ###Markdown
Python: without numpy or sklearn Q1: Given two matrices please print the product of those two matrices Ex 1: A = [[1 3 4] [2 5 7] [5 9 6]] B = [[1 0 0] [0 1 0] [0 0 1]] A*B = [[1 3 4] [2 5 7] [5 9 6]] Ex 2: A = [[1 2] [3 4]] B = [[1 2 3 4 5] [5 6 7 8 9]] A*B = [[11 14 17 20 23] [18 24 30 36 42]] Ex 3: A = [[1 2] [3 4]] B = [[1 4] [5 6] [7 8] [9 6]] A*B =Not possible
###Code
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input examples
# you can free to change all these codes/structure
# here A and B are list of lists
def matrix_mul(A, B):
# write your code
return(#multiplication_of_A_and_B)
matrix_mul(A, B)
###Output
_____no_output_____
###Markdown
Q2: Select a number randomly with probability proportional to its magnitude from the given array of n elementsconsider an experiment, selecting an element from the list A randomly with probability proportional to its magnitude.assume we are doing the same experiment for 100 times with replacement, in each experiment you will print a number that is selected randomly from A.Ex 1: A = [0 5 27 6 13 28 100 45 10 79]let f(x) denote the number of times x getting selected in 100 experiments.f(100) > f(79) > f(45) > f(28) > f(27) > f(13) > f(10) > f(6) > f(5) > f(0)
###Code
from random import uniform
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input examples
# you can free to change all these codes/structure
def pick_a_number_from_list(A):
# your code here for picking an element from with the probability propotional to its magnitude
#.
#.
#.
return #selected_random_number
def sampling_based_on_magnitued():
for i in range(1,100):
number = pick_a_number_from_list(A)
print(number)
sampling_based_on_magnitued()
###Output
_____no_output_____
###Markdown
Q3: Replace the digits in the string with Consider a string that will have digits in that, we need to remove all the characters which are not digits and replace the digits with Ex 1: A = 234 Output: Ex 2: A = a2b3c4 Output: Ex 3: A = abc Output: (empty string)Ex 5: A = 2a$b%c%561 Output:
###Code
import re
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input examples
# you can free to change all these codes/structure
# String: it will be the input to your program
def replace_digits(String):
# write your code
#
return() # modified string which is after replacing the # with digits
replace_digits(String)
###Output
_____no_output_____
###Markdown
Q4: Students marks dashboardConsider the marks list of class students given in two lists Students = ['student1','student2','student3','student4','student5','student6','student7','student8','student9','student10'] Marks = [45, 78, 12, 14, 48, 43, 45, 98, 35, 80] from the above two lists the Student[0] got Marks[0], Student[1] got Marks[1] and so on. Your task is to print the name of studentsa. Who got top 5 ranks, in the descending order of marks b. Who got least 5 ranks, in the increasing order of marksd. Who got marks between >25th percentile <75th percentile, in the increasing order of marks.Ex 1: Students=['student1','student2','student3','student4','student5','student6','student7','student8','student9','student10'] Marks = [45, 78, 12, 14, 48, 43, 47, 98, 35, 80]a. student8 98student10 80student2 78student5 48student7 47b.student3 12student4 14student9 35student6 43student1 45c.student9 35student6 43student1 45student7 47student5 48
###Code
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input examples
# you can free to change all these codes/structure
def display_dash_board(students, marks):
# write code for computing top top 5 students
top_5_students = # compute this
# write code for computing top least 5 students
least_5_students = # compute this
# write code for computing top least 5 students
students_within_25_and_75 = # compute this
return top_5_students, least_5_students, students_within_25_and_75
top_5_students, least_5_students, students_within_25_and_75 = display_dash_board(students, marks)
print(# those values)
###Output
_____no_output_____
###Markdown
Q5: Find the closest pointsConsider you are given n data points in the form of list of tuples like S=[(x1,y1),(x2,y2),(x3,y3),(x4,y4),(x5,y5),..,(xn,yn)] and a point P=(p,q) your task is to find 5 closest points(based on cosine distance) in S from PCosine distance between two points (x,y) and (p,q) is defined as $cos^{-1}(\frac{(x\cdot p+y\cdot q)}{\sqrt(x^2+y^2)\cdot\sqrt(p^2+q^2)})$Ex:S= [(1,2),(3,4),(-1,1),(6,-7),(0, 6),(-5,-8),(-1,-1)(6,0),(1,-1)]P= (3,-4)Output:(6,-7)(1,-1)(6,0)(-5,-8)(-1,-1)
###Code
import math
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input examples
# you can free to change all these codes/structure
# here S is list of tuples and P is a tuple of len=2
def closest_points_to_p(S, P):
# write your code here
return closest_points_to_p # its list of tuples
S= [(1,2),(3,4),(-1,1),(6,-7),(0, 6),(-5,-8),(-1,-1),(6,0),(1,-1)]
P= (3,-4)
points = closest_points_to_p(S, P)
print() #print the returned values
###Output
_____no_output_____
###Markdown
Q6: Find which line separates oranges and applesConsider you are given two set of data points in the form of list of tuples like Red =[(R11,R12),(R21,R22),(R31,R32),(R41,R42),(R51,R52),..,(Rn1,Rn2)]Blue=[(B11,B12),(B21,B22),(B31,B32),(B41,B42),(B51,B52),..,(Bm1,Bm2)]and set of line equations(in the string format, i.e list of strings)Lines = [a1x+b1y+c1,a2x+b2y+c2,a3x+b3y+c3,a4x+b4y+c4,..,K lines]Note: You need to do string parsing here and get the coefficients of x,y and intercept.Your task here is to print "YES"/"NO" for each line given. You should print YES, if all the red points are one side of the line and blue points are on other side of the line, otherwise you should print NO.Ex:Red= [(1,1),(2,1),(4,2),(2,4), (-1,4)]Blue= [(-2,-1),(-1,-2),(-3,-2),(-3,-1),(1,-3)]Lines=["1x+1y+0","1x-1y+0","1x+0y-3","0x+1y-0.5"]Output:YESNONOYES
###Code
import math
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input strings
# you can free to change all these codes/structure
def i_am_the_one(red,blue,line):
# your code
return #yes/no
Red= [(1,1),(2,1),(4,2),(2,4), (-1,4)]
Blue= [(-2,-1),(-1,-2),(-3,-2),(-3,-1),(1,-3)]
Lines=["1x+1y+0","1x-1y+0","1x+0y-3","0x+1y-0.5"]
for i in Lines:
yes_or_no = i_am_the_one(Red, Blue, i)
print() # the returned value
###Output
_____no_output_____
###Markdown
Q7: Filling the missing values in the specified formatYou will be given a string with digits and '\_'(missing value) symbols you have to replace the '\_' symbols as explained Ex 1: _, _, _, 24 ==> 24/4, 24/4, 24/4, 24/4 i.e we. have distributed the 24 equally to all 4 places Ex 2: 40, _, _, _, 60 ==> (60+40)/5,(60+40)/5,(60+40)/5,(60+40)/5,(60+40)/5 ==> 20, 20, 20, 20, 20 i.e. the sum of (60+40) is distributed qually to all 5 placesEx 3: 80, _, _, _, _ ==> 80/5,80/5,80/5,80/5,80/5 ==> 16, 16, 16, 16, 16 i.e. the 80 is distributed qually to all 5 missing values that are right to itEx 4: _, _, 30, _, _, _, 50, _, _ ==> we will fill the missing values from left to right a. first we will distribute the 30 to left two missing values (10, 10, 10, _, _, _, 50, _, _) b. now distribute the sum (10+50) missing values in between (10, 10, 12, 12, 12, 12, 12, _, _) c. now we will distribute 12 to right side missing values (10, 10, 12, 12, 12, 12, 4, 4, 4)for a given string with comma seprate values, which will have both missing values numbers like ex: "_, _, x, _, _, _"you need fill the missing valuesQ: your program reads a string like ex: "_, _, x, _, _, _" and returns the filled sequenceEx: Input1: "_,_,_,24"Output1: 6,6,6,6Input2: "40,_,_,_,60"Output2: 20,20,20,20,20Input3: "80,_,_,_,_"Output3: 16,16,16,16,16Input4: "_,_,30,_,_,_,50,_,_"Output4: 10,10,12,12,12,12,4,4,4
###Code
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input strings
# you can free to change all these codes/structure
def curve_smoothing(string):
# your code
return #list of values
S= "_,_,30,_,_,_,50,_,_"
smoothed_values= curve_smoothing(S)
print(# print above values)
###Output
_____no_output_____
###Markdown
Q8: Find the probabilitiesYou will be given a list of lists, each sublist will be of length 2 i.e. [[x,y],[p,q],[l,m]..[r,s]]consider its like a martrix of n rows and two columns1. The first column F will contain only 5 uniques values (F1, F2, F3, F4, F5)2. The second column S will contain only 3 uniques values (S1, S2, S3)your task is to finda. Probability of P(F=F1|S==S1), P(F=F1|S==S2), P(F=F1|S==S3)b. Probability of P(F=F2|S==S1), P(F=F2|S==S2), P(F=F2|S==S3)c. Probability of P(F=F3|S==S1), P(F=F3|S==S2), P(F=F3|S==S3)d. Probability of P(F=F4|S==S1), P(F=F4|S==S2), P(F=F4|S==S3)e. Probability of P(F=F5|S==S1), P(F=F5|S==S2), P(F=F5|S==S3)Ex:[[F1,S1],[F2,S2],[F3,S3],[F1,S2],[F2,S3],[F3,S2],[F2,S1],[F4,S1],[F4,S3],[F5,S1]]a. P(F=F1|S==S1)=1/4, P(F=F1|S==S2)=1/3, P(F=F1|S==S3)=0/3b. P(F=F2|S==S1)=1/4, P(F=F2|S==S2)=1/3, P(F=F2|S==S3)=1/3c. P(F=F3|S==S1)=0/4, P(F=F3|S==S2)=1/3, P(F=F3|S==S3)=1/3d. P(F=F4|S==S1)=1/4, P(F=F4|S==S2)=0/3, P(F=F4|S==S3)=1/3e. P(F=F5|S==S1)=1/4, P(F=F5|S==S2)=0/3, P(F=F5|S==S3)=0/3
###Code
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input strings
# you can free to change all these codes/structure
def compute_conditional_probabilites(A):
# your code
# print the output as per the instructions
A = [[F1,S1],[F2,S2],[F3,S3],[F1,S2],[F2,S3],[F3,S2],[F2,S1],[F4,S1],[F4,S3],[F5,S1]]
compute_conditional_probabilites(A)
###Output
_____no_output_____
###Markdown
Q9: Operations on sentences You will be given two sentances S1, S2 your task is to find a. Number of common words between S1, S2b. Words in S1 but not in S2c. Words in S2 but not in S1Ex: S1= "the first column F will contain only 5 unique values"S2= "the second column S will contain only 3 unique values"Output:a. 7b. ['first','F','5']c. ['second','S','3']
###Code
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input strings
# you can free to change all these codes/structure
def string_features(S1, S2):
# your code
return a, b, c
S1= "the first column F will contain only 5 uniques values"
S2= "the second column S will contain only 3 uniques values"
a,b,c = string_features(S1, S2)
print(#the above values)
###Output
_____no_output_____
###Markdown
Q10: Error FunctionYou will be given a list of lists, each sublist will be of length 2 i.e. [[x,y],[p,q],[l,m]..[r,s]]consider its like a martrix of n rows and two columnsa. the first column Y will contain interger values b. the second column $Y_{score}$ will be having float values Your task is to find the value of $f(Y,Y_{score}) = -1*\frac{1}{n}\Sigma_{for each Y,Y_{score} pair}(Ylog10(Y_{score})+(1-Y)log10(1-Y_{score}))$ here n is the number of rows in the matrixEx:[[1, 0.4], [0, 0.5], [0, 0.9], [0, 0.3], [0, 0.6], [1, 0.1], [1, 0.9], [1, 0.8]]output:0.44982$\frac{-1}{8}\cdot((1\cdot log_{10}(0.4)+0\cdot log_{10}(0.6))+(0\cdot log_{10}(0.5)+1\cdot log_{10}(0.5)) + ... + (1\cdot log_{10}(0.8)+0\cdot log_{10}(0.2)) )$
###Code
# write your python code here
# you can take the above example as sample input for your program to test
# it should work for any general input try not to hard code for only given input strings
# you can free to change all these codes/structure
def compute_log_loss(A):
# your code
return loss
A = [[1, 0.4], [0, 0.5], [0, 0.9], [0, 0.3], [0, 0.6], [1, 0.1], [1, 0.9], [1, 0.8]]
loss = compute_log_loss(A)
print(# the above loss)
###Output
_____no_output_____ |
BigQuery.ipynb | ###Markdown
First **BigQuery commands**To use BigQuery, we'll import the Python package below
###Code
from google.cloud import bigquery
#
client = bigquery.Client()
###Output
_____no_output_____
###Markdown
This notebook contains the exercises from the SQL training week conducted by Kaggle. It uses public tables in Google's BigQuery database
###Code
import pandas as pd
pd.set_option('display.max_columns', 999)
# Imports the Google Cloud client library
from google.cloud import bigquery
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/asanzgiri/Big-Query-23c4a79792b7.json'
# Instantiates a client
bigquery_client = bigquery.Client()
import pandas as pd
import bq_helper
###Output
_____no_output_____
###Markdown
Day 1
###Code
open_aq = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="openaq")
open_aq.list_tables()
open_aq.head("global_air_quality", num_rows=3)
query = """SELECT city
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US'
"""
us_cities = open_aq.query_to_pandas_safe(query)
us_cities.city.value_counts().head()
query = """SELECT distinct(country) as country, unit
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit != 'ppm'
"""
countries = open_aq.query_to_pandas_safe(query)
countries.head()
query = """SELECT distinct(pollutant)
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE value=0.0
"""
pollutants = open_aq.query_to_pandas_safe(query)
pollutants.head()
###Output
_____no_output_____
###Markdown
Day 2
###Code
# create a helper object for this dataset
hacker_news = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="hacker_news")
# print the first couple rows of the "comments" table
hacker_news.head("comments")
query = """SELECT parent, COUNT(id)
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
HAVING COUNT(id) > 10
"""
popular_stories = hacker_news.query_to_pandas_safe(query)
popular_stories.head()
hacker_news.list_tables()
query = """SELECT type, COUNT(id) as count
FROM `bigquery-public-data.hacker_news.full`
GROUP BY type
"""
count_types = hacker_news.query_to_pandas_safe(query)
count_types.head()
query = """SELECT count(id) as count
FROM `bigquery-public-data.hacker_news.comments`
where deleted = True
"""
deleted_comments = hacker_news.query_to_pandas_safe(query)
deleted_comments.head()
query = """SELECT
countif(type = 'story') as count
FROM `bigquery-public-data.hacker_news.full`
where type = 'story'
"""
count_types = hacker_news.query_to_pandas_safe(query)
count_types.head()
query = """SELECT countif(deleted = True) as count
FROM `bigquery-public-data.hacker_news.comments`
where deleted = True
"""
deleted_comments = hacker_news.query_to_pandas_safe(query)
deleted_comments.head()
###Output
_____no_output_____
###Markdown
Day 3
###Code
accidents = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="nhtsa_traffic_fatalities")
query = """SELECT COUNT(consecutive_number),
EXTRACT(DAYOFWEEK FROM timestamp_of_crash)
FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`
GROUP BY EXTRACT(DAYOFWEEK FROM timestamp_of_crash)
ORDER BY COUNT(consecutive_number) DESC
"""
accidents_by_day = accidents.query_to_pandas_safe(query)
# library for plotting
import matplotlib.pyplot as plt
# make a plot to show that our data is, actually, sorted:
plt.plot(accidents_by_day.f0_)
plt.title("Number of Accidents by Rank of Day \n (Most to least dangerous)")
plt.show()
accidents.list_tables()
accidents.head('accident_2016')
accidents.table_schema('accident_2016')
query = """SELECT COUNT(consecutive_number) as num_accidents,
EXTRACT(HOUR FROM timestamp_of_crash) as hour
FROM `bigquery-public-data.nhtsa_traffic_fatalities.accident_2015`
GROUP BY hour
ORDER BY num_accidents DESC
"""
accidents_by_hour = accidents.query_to_pandas_safe(query)
plt.plot( accidents_by_hour.hour, accidents_by_hour.num_accidents, '.')
plt.title("Number of Accidents by Rank of Hour \n (Most to least dangerous)")
plt.show()
accidents_by_hour.head()
query = """SELECT countif(hit_and_run = 'Yes') as total_hit_and_runs,
registration_state_name as state
FROM `bigquery-public-data.nhtsa_traffic_fatalities.vehicle_2016`
GROUP BY state
ORDER BY total_hit_and_runs DESC
limit 5
"""
hit_and_runs_by_state = accidents.query_to_pandas_safe(query)
hit_and_runs_by_state.head(5)
pd.set_option('display.max_columns', 999)
accidents.head("vehicle_2016")
###Output
_____no_output_____
###Markdown
Day 4
###Code
bitcoin_blockchain = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="bitcoin_blockchain")
query = """ WITH time AS
(
SELECT TIMESTAMP_MILLIS(timestamp) AS trans_time,
transaction_id
FROM `bigquery-public-data.bitcoin_blockchain.transactions`
)
SELECT COUNT(transaction_id) AS transactions,
EXTRACT(MONTH FROM trans_time) AS month,
EXTRACT(YEAR FROM trans_time) AS year
FROM time
GROUP BY year, month
ORDER BY year, month
"""
transactions_per_month = bitcoin_blockchain.query_to_pandas_safe(query, max_gb_scanned=21)
# plot monthly bitcoin transactions
plt.plot(transactions_per_month.transactions)
plt.title("Monthly Bitcoin Transcations")
plt.show()
query = """ WITH time AS
(
SELECT TIMESTAMP_MILLIS(timestamp) AS trans_time,
transaction_id
FROM `bigquery-public-data.bitcoin_blockchain.transactions`
)
SELECT COUNT(transaction_id) AS transactions,
EXTRACT(DAYOFYEAR FROM trans_time) AS day
FROM time
GROUP BY DAY
ORDER BY DAY asc
"""
transactions_by_day = bitcoin_blockchain.query_to_pandas_safe(query, max_gb_scanned=21)
# plot monthly bitcoin transactions
plt.plot(transactions_by_day.transactions)
plt.title("Daily Bitcoin Transcations")
plt.show()
bitcoin_blockchain.head('transactions')
query = """
SELECT merkle_root, count(transaction_id) as trans_count
FROM `bigquery-public-data.bitcoin_blockchain.transactions`
group by merkle_root
order by trans_count desc
"""
transactions_by_merkle_root = bitcoin_blockchain.query_to_pandas_safe(query, max_gb_scanned=40)
transactions_by_merkle_root.head()
###Output
_____no_output_____
###Markdown
Day 5
###Code
github = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="github_repos")
query = ("""
-- Select all the columns we want in our joined table
SELECT L.license, COUNT(sf.path) AS number_of_files
FROM `bigquery-public-data.github_repos.sample_files` as sf
-- Table to merge into sample_files
INNER JOIN `bigquery-public-data.github_repos.licenses` as L
ON sf.repo_name = L.repo_name -- what columns should we join on?
GROUP BY L.license
ORDER BY number_of_files DESC
""")
file_count_by_license = github.query_to_pandas_safe(query, max_gb_scanned=6)
print(file_count_by_license)
github.head('sample_commits')
github.head('sample_files')
query = """
select a.repo_name, count(a.id) as num_commits
from `bigquery-public-data.github_repos.sample_files` as a
inner join `bigquery-public-data.github_repos.sample_commits` as b
on a.repo_name = b.repo_name
where a.path like '%.py'
group by a.repo_name
order by num_commits desc
"""
python_commits = github.query_to_pandas_safe(query, max_gb_scanned=9)
print(python_commits)
###Output
repo_name num_commits
0 torvalds/linux 23501556
1 tensorflow/tensorflow 4128858
2 apple/swift 4044664
3 facebook/react 13750
4 Microsoft/vscode 6909
###Markdown
Notebook created to support learning how to use Python with Google BigQuery.Required installations:-pip install pandas-pip install google-cloud-bigquery You also need a project created on Google Cloud and its Service Account key.
###Code
# Access key:
service_account_json = r"C:\Users\Leopoldo\Desktop\Curso Big Query\big_query_leopoldo.json"
client = bigquery.Client.from_service_account_json(service_account_json)
# Reference the project
dataset_ref = client.dataset("tabelas_fato", project="curso-big-query-318121")
dataset = client.get_dataset(dataset_ref)
# Print the tables in the dataset
lista = list(client.list_tables(dataset))
for x in lista:
print(x.table_id)
# Create the table reference
table_ref = dataset_ref.table("fato_presidencia")
# API request - fetch the table
table = client.get_table(table_ref)
# Check the columns and their data types
# 1st field: column name
# 2nd field: type
# 3rd field: whether NULL is allowed
# 4th field: column description
table.schema
# Preview the table
client.list_rows(table,max_results = 10).to_dataframe()
query = """
SELECT
dim_tempo.Data,
dim_fabrica.Desc_Fabrica,
dim_organizacional.Desc_Vendedor as vendedor, dim_organizacional.Desc_Gerente as gerente, dim_organizacional.Desc_Diretor as diretor,
dim_cliente.Desc_Cliente, dim_cliente.Desc_Cidade as cidade_cliente,
dim_produto.Atr_Tamanho, dim_produto.Desc_Categoria, dim_produto.Desc_Marca, dim_produto.Desc_Produto,
ROUND(fato_presidencia.Faturamento,2) as faturamento, ROUND(fato_presidencia.Imposto,2) as imposto, ROUND(fato_presidencia.Custo_Fixo,2) as custo_fixo,
ROUND(fato_presidencia.Custo_Variavel,2) as custo_variavel, ROUND(fato_presidencia.Meta_Custo,2) as meta_custo, ROUND(fato_presidencia.Meta_Faturamento,2) as meta_faturamento
FROM `curso-big-query-318121.tabelas_fato.fato_presidencia` AS fato_presidencia
INNER JOIN `curso-big-query-318121.tabelas_fato.dim_fabrica` AS dim_fabrica
ON fato_presidencia.ID_Fabrica = dim_fabrica.ID_Fabrica
INNER JOIN `curso-big-query-318121.tabelas_fato.dim_cliente` AS dim_cliente
ON fato_presidencia.ID_Cliente = dim_cliente.ID_Cliente
INNER JOIN `curso-big-query-318121.tabelas_fato.dim_tempo` as dim_tempo
ON fato_presidencia.ID_Tempo = dim_tempo.ID_Tempo
INNER JOIN `curso-big-query-318121.tabelas_fato.dim_organizacional` as dim_organizacional
ON fato_presidencia.ID_Vendedor = dim_organizacional.ID_Vendedor
INNER JOIN `curso-big-query-318121.tabelas_fato.dim_produto` as dim_produto
ON fato_presidencia.ID_Produto = dim_produto.ID_Produto
ORDER BY dim_tempo.Data DESC
;
"""
# Estimate how much data the query would scan (dry run):
dry_run_config = bigquery.QueryJobConfig(dry_run=True)
dry_run_query_job = client.query(query, job_config = dry_run_config)
consumo_mbytes = round(dry_run_query_job.total_bytes_processed / 1024 / 1024,2)
print(f"Essa consulta consumirá: {consumo_mbytes} MB")
# Run the query and build a DataFrame
query_job = client.query(query)
resultado = query_job.to_dataframe()
resultado
query2 = """
SELECT * FROM `curso-big-query-318121.tabelas_fato.fato_presidencia` AS fato_presidencia
;
"""
# Suppose we want to cap the maximum amount billed per query:
try:
maximo_mb = 0.5 * 1024 * 1024 #Resultado em MB
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed =maximo_mb)
safe_query_job = client.query(query2, job_config=safe_config)
resultado = safe_query_job.to_dataframe()
except Exception as e :
print("Consumo acima do estipulado \nErro: \n",e)
def funcao_query(query, limite_mb, nome_tabela, nome_projeto):
"""Função conecta no projeto e returna a query especificada se estiver dentro do limite"""
service_account_json = r"C:\Users\Leopoldo\Desktop\Curso Big Query\big_query_leopoldo.json"
client = bigquery.Client.from_service_account_json(service_account_json)
    # Reference the project
dataset_ref = client.dataset(nome_tabela, project=nome_projeto)
dataset = client.get_dataset(dataset_ref)
    # Estimate how much data the query would scan (dry run):
dry_run_config = bigquery.QueryJobConfig(dry_run=True)
dry_run_query_job = client.query(query, job_config = dry_run_config)
consumo_mbytes = round(dry_run_query_job.total_bytes_processed / 1024 / 1024,2)
print(f"Essa consulta consumirá: {consumo_mbytes} MB")
if consumo_mbytes < limite_mb:
query_job = client.query(query)
resultado = query_job.to_dataframe()
return resultado
else:
print(f"Consulta não realizada, consumo acima do estipulado")
return False
resultado = funcao_query(query, 0.5, "tabelas_fato", "curso-big-query-318121")
resultado = funcao_query(query2, 10, "tabelas_fato", "curso-big-query-318121")
resultado
###Output
_____no_output_____ |
tasks/Conditional_statements_and_for-loops.ipynb | ###Markdown
For-loops and conditional statementsThis homework aims to help you understand loops and conditional statements by giving you little tasks or functions to design. Your first task is to write a program that finds all numbers in a certain range that fit some conditions. It's always useful to be able to go through your data and find outliers or interesting data points, and for large datasets it's impossible to do that by hand. Let's get started with the things you need to do it effectively. conditional statements
###Code
a = True
if a:
print("this gets printed, because the statement behind the 'if' is True")
if not a:
print("this gets ignored, because a was negated")
###Output
this gets printed, because the statement behind the 'if' is True
###Markdown
'if' statements work like this: first comes the word "if" and then a statement; there are basically no limitations on how complex the statement can be.The words 'and' and 'or' help to combine statements when you need multiple conditions simultaneously, check the cell below.
###Code
a = True
b = False
if ((a or b) and (a or not b)) and (a and not b):
    print("I am surprised I managed to get this statement to be true on first try!")
#lets see what the result looks like with different brackets.
if (a or b and a or not b and a) and b:
    print("I am surprised I managed to get this statement to be true on first try!")
else:
print("Since the statement after 'If' is wrong we got something 'Else' printed ;)")
###Output
Since the statement after 'If' is wrong we got something 'Else' printed ;)
###Markdown
Logical statements are a huge part of mathematics and informatics, but you will probably only need 'and', which only returns True if the statements on its left and right are both true, and 'or', which returns True if at least one of the statements is true. For-loops Like we demonstrated a couple of times in the last workshop, for-loops are relatively easy to create and follow these rules:- for arbitrary_variable_name in some_iterable_object: - repeat something for every object in some_iterable_object First comes the keyword 'for', then a variable name, then the keyword 'in' and then the object you want to loop through.In each iteration an object from the iterable gets picked and assigned to the variable; for lists and arrays the loop iterates in order, and for dictionaries it follows insertion order in Python 3.7+ (older versions gave no such guarantee). In most cases you will iterate through lists, so from now on the iterable object will just be a list.
###Code
your_list = [1,2,"string",[3,4,5]]
for item in your_list:
print(item)
###Output
_____no_output_____
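###Markdown
A for-loop is often combined with an if-statement to filter items while looping, which is exactly the pattern the task at the top asks for. Below is a minimal sketch that collects the numbers in a list satisfying a condition; the list and the condition are just example choices.
###Code
numbers = [3, 8, 15, 22, 7, 40]
matches = []
for n in numbers:
    if n > 10 and n % 2 == 0:   # keep even numbers greater than 10
        matches.append(n)
print(matches)                  # [22, 40]
###Output
_____no_output_____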
###Markdown
Sometimes you want to repeat something a number of times without iterating over an existing list; in that case you can create the sequence directly in the for-loop header with the range() function. range() also accepts a start value and a step size, as the examples below show.
###Code
a_string = ""
for a_number in range(10):
a_string = a_string + "i "
print(a_string)
for bla in range(5,10):
print(bla)
for bla in range(5,10,2):
print(bla)
###Output
_____no_output_____ |
getting-started/spark-jdbc.ipynb | ###Markdown
Spark JDBC to Databases- [Overview](spark-jdbc-overview)- [Setup](spark-jdbc-setup) - [Define Environment Variables](spark-jdbc-define-envir-vars) - [Initiate a Spark JDBC Session](spark-jdbc-init-session) - [Load Driver Packages Dynamically](spark-jdbc-init-dynamic-pkg-load) - [Load Driver Packages Locally](spark-jdbc-init-local-pkg-load)- [Connect to Databases Using Spark JDBC](spark-jdbc-connect-to-dbs) - [Connect to a MySQL Database](spark-jdbc-to-mysql) - [Connecting to a Public MySQL Instance](spark-jdbc-to-mysql-public) - [Connecting to a Test or Temporary MySQL Instance](spark-jdbc-to-mysql-test-or-temp) - [Connect to a PostgreSQL Database](spark-jdbc-to-postgresql) - [Connect to an Oracle Database](spark-jdbc-to-oracle) - [Connect to an MS SQL Server Database](spark-jdbc-to-ms-sql-server) - [Connect to a Redshift Database](spark-jdbc-to-redshift)- [Cleanup](spark-jdbc-cleanup) - [Delete Data](spark-jdbc-delete-data) - [Release Spark Resources](spark-jdbc-release-spark-resources) OverviewSpark SQL includes a data source that can read data from other databases using Java database connectivity (**JDBC**).The results are returned as a Spark DataFrame that can easily be processed in Spark SQL or joined with other data sources.For more information, see the [Spark documentation](https://spark.apache.org/docs/2.3.1/sql-programming-guide.htmljdbc-to-other-databases). Setup Define Environment VariablesBegin by initializing some environment variables.> **Note:** You need to edit the following code to assign valid values to the database variables (`DB_XXX`).
###Code
import os
# Read Iguazio Data Science Platform ("the platform") environment variables into local variables
V3IO_USER = os.getenv('V3IO_USERNAME')
V3IO_HOME = os.getenv('V3IO_HOME')
V3IO_HOME_URL = os.getenv('V3IO_HOME_URL')
# Define database environment variables
# TODO: Edit the variable definitions to assign valid values for your environment.
%env DB_HOST = "" # Database host as a fully qualified name (FQN)
%env DB_PORT = "" # Database port number
%env DB_DRIVER = ""      # Database driver [mysql|postgresql|oracle:thin|sqlserver]
%env DB_Name = "" # Database|schema name
%env DB_TABLE = "" # Table name
%env DB_USER = "" # Database username
%env DB_PASSWORD = "" # Database user password
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
###Output
env: DB_HOST="" # Database host's fully qualified name
env: DB_PORT="" # Port num of the database
env: DB_DRIVER="" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]
env: DB_Name="" # Database|Schema Name
env: DB_TABLE="" # Table Name
env: DB_USER="" # Database User Name
env: DB_PASSWORD="" # Database User's Password
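###Markdown
The `DB_XXX` variables above are defined but not referenced again in this notebook. The following cell is a minimal sketch (an addition for illustration, assuming the variables are set to plain values without the inline `# ...` comments) of how they could be assembled into a JDBC URL and connection properties for the read examples below.
###Code
import os

# Illustrative sketch only: build a JDBC URL and connection properties from the DB_* variables.
# Assumes the values were set without the trailing inline comments shown above.
db_driver = os.getenv("DB_DRIVER", "mysql").strip('"')
db_host = os.getenv("DB_HOST", "localhost").strip('"')
db_port = os.getenv("DB_PORT", "3306").strip('"')
db_name = os.getenv("DB_Name", "mydb").strip('"')

jdbc_url = "jdbc:{}://{}:{}/{}".format(db_driver, db_host, db_port, db_name)
connection_properties = {
    "user": os.getenv("DB_USER", ""),
    "password": os.getenv("DB_PASSWORD", ""),
}
print(jdbc_url)
###Output
_____no_output_____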
###Markdown
Initiate a Spark JDBC SessionYou can select between two methods for initiating a Spark session with JDBC drivers ("Spark JDBC session"):- [Load Driver Packages Dynamically](spark-jdbc-init-dynamic-pkg-load) (preferred)- [Load Driver Packages Locally](spark-jdbc-init-local-pkg-load) Load Driver Packages DynamicallyThe preferred method for initiating a Spark JDBC session is to load the required JDBC driver packages dynamically from https://spark-packages.org/ by doing the following:1. Set the `PYSPARK_SUBMIT_ARGS` environment variable to `"--packages <group-id>:<artifact-id>:<version> pyspark-shell"`.2. Initiate a new Spark session.The following example demonstrates how to initiate a Spark session that uses version 5.1.39 of the **mysql-connector-java** MySQL JDBC database driver (`mysql:mysql-connector-java:5.1.39`).
###Code
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
# Configure the Spark JDBC driver package
# TODO: Replace `mysql:mysql-connector-java:5.1.39` with the required driver-package information.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
# Initiate a new Spark session; you can change the application name
spark = SparkSession.builder.appName("Spark JDBC tutorial").getOrCreate()
###Output
_____no_output_____
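###Markdown
The same dynamic-loading approach works for other databases by swapping the Maven coordinates in `PYSPARK_SUBMIT_ARGS`. As an illustration (an addition, not from the original tutorial), the PostgreSQL JDBC driver could be requested like this before creating the session; `org.postgresql:postgresql:42.2.5` is just one example coordinate.
###Code
import os

# Illustrative sketch only: request the PostgreSQL JDBC driver instead of the MySQL one.
# This must be set before the Spark session is created.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages org.postgresql:postgresql:42.2.5 pyspark-shell"
###Output
_____no_output_____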
###Markdown
Load Driver Packages LocallyYou can also load the Spark JDBC driver package from the local file system of your Iguazio Data Science Platform ("the platform").It's recommended that you use this method only if you don't have an internet connection ("dark-site installations") or if there's no official Spark package for your database.The platform comes pre-deployed with MySQL, PostgreSQL, Oracle, Redshift, and MS SQL Server JDBC driver packages, which are found in the **/spark/3rd_party** directory (**$SPARK_HOME/3rd_party**).You can also copy additional driver packages or different versions of the pre-deployed drivers to the platform — for example, from the **Data** dashboard page.To load a JDBC driver package locally, you need to set the `spark.driver.extraClassPath` and `spark.executor.extraClassPath` Spark configuration properties to the path to a Spark JDBC driver package in the platform's file system.You can do this using either of the following alternative methods:- Preconfigure the path to the driver package — 1. In your Spark-configuration file — **$SPARK_HOME/conf/spark-defaults.conf** — set the `extraClassPath` configuration properties to the path to the relevant driver package: ```python spark.driver.extraClassPath = "<path-to-driver-package>" spark.executor.extraClassPath = "<path-to-driver-package>" ``` 2. Initiate a new Spark session.- Configure the path to the driver package as part of the initiation of a new Spark session: ```python spark = SparkSession.builder. \ appName("<app-name>"). \ config("spark.driver.extraClassPath", "<path-to-driver-package>"). \ config("spark.executor.extraClassPath", "<path-to-driver-package>"). \ getOrCreate() ```The following example demonstrates how to initiate a Spark session that uses the pre-deployed version 8.0.13 of the **mysql-connector-java** MySQL JDBC database driver (**/spark/3rd_party/mysql-connector-java-8.0.13.jar**).
###Code
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
# METHOD I
# Edit your Spark configuration file ($SPARK_HOME/conf/spark-defaults.conf), set the `spark.driver.extraClassPath` and
# `spark.executor.extraClassPath` properties to the local file-system path to a pre-deployed Spark JDBC driver package.
# Replace "/spark/3rd_party/mysql-connector-java-8.0.13.jar" with the relevant path.
# spark.driver.extraClassPath = "/spark/3rd_party/mysql-connector-java-8.0.13.jar"
# spark.executor.extraClassPath = "/spark/3rd_party/mysql-connector-java-8.0.13.jar"
#
# Then, initiate a new Spark session; you can change the application name.
# spark = SparkSession.builder.appName("Spark JDBC tutorial").getOrCreate()
# METHOD II
# Initiate a new Spark Session; you can change the application name.
# Set the same `extraClassPath` configuration properties as in Method #1 as part of the initiation command.
# Replace "/spark/3rd_party/mysql-connector-java-8.0.13.jar" with the relevant path.
# local file-system path to a pre-deployed Spark JDBC driver package
spark = SparkSession.builder. \
appName("Spark JDBC tutorial"). \
config("spark.driver.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar"). \
config("spark.executor.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar"). \
getOrCreate()
import pprint
# Verify your configuration: run the following code to list the current Spark configurations, and check the output to verify that the
# `spark.driver.extraClassPath` and `spark.executor.extraClassPath` properties are set to the correct local driver-package path.
conf = spark.sparkContext._conf.getAll()
pprint.pprint(conf)
###Output
[('spark.sql.catalogImplementation', 'in-memory'),
('spark.driver.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.app.id', 'app-20190704070308-0001'),
('spark.executor.memory', '2G'),
('spark.executor.id', 'driver'),
('spark.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.cores.max', '4'),
('spark.executorEnv.V3IO_ACCESS_KEY', 'bb79fffa-7582-4fd2-9347-a350335801fc'),
('spark.driver.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.executor.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.driver.port', '33751'),
('spark.driver.host', '10.233.92.91'),
('spark.executor.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.submit.pyFiles',
'/igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.app.name', 'Spark JDBC tutorial'),
('spark.repl.local.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.rdd.compress', 'True'),
('spark.serializer.objectStreamReset', '100'),
('spark.files',
'file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.executor.cores', '1'),
('spark.executor.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.submit.deployMode', 'client'),
('spark.driver.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.ui.showConsoleProgress', 'true'),
('spark.executorEnv.V3IO_USERNAME', 'iguazio'),
('spark.master', 'spark://spark-jddcm4iwas-qxw13-master:7077')]
###Markdown
Connect to Databases Using Spark JDBC Connect to a MySQL Database- [Connecting to a Public MySQL Instance](spark-jdbc-to-mysql-public)- [Connecting to a Test or Temporary MySQL Instance](spark-jdbc-to-mysql-test-or-temp) Connect to a Public MySQL Instance
###Code
#Loading data from a JDBC source
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://mysql-rfam-public.ebi.ac.uk:4497/Rfam") \
.option("dbtable", "Rfam.family") \
.option("user", "rfamro") \
.option("password", "") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
|rfam_acc| rfam_id|auto_wiki| description| author| seed_source|gathering_cutoff|trusted_cutoff|noise_cutoff| comment| previous_id| cmbuild| cmcalibrate| cmsearch|num_seed|num_full|num_genome_seq|num_refseq| type| structure_source|number_of_species|number_3d_structures|num_pseudonokts|tax_seed|ecmli_lambda| ecmli_mu|ecmli_cal_db|ecmli_cal_hits|maxl|clen|match_pair_node|hmm_tau|hmm_lambda| created| updated|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
| RF00001| 5S_rRNA| 1302| 5S ribosomal RNA|Griffiths-Jones S...|Szymanski et al, ...| 38.0| 38.0| 37.9|5S ribosomal RNA ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 712| 139932| 0| 0| Gene; rRNA;|Published; PMID:1...| 8022| 0| null| | 0.59496| -5.32219| 1600000| 213632| 305| 119| true|-3.7812| 0.71822|2013-10-03 20:41:44|2019-01-04 15:01:52|
| RF00002| 5_8S_rRNA| 1303| 5.8S ribosomal RNA|Griffiths-Jones S...|Wuyts et al, Euro...| 42.0| 42.0| 41.9|5.8S ribosomal RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 61| 4716| 0| 0| Gene; rRNA;|Published; PMID:1...| 587| 0| null| | 0.65546| -9.33442| 1600000| 410495| 277| 154| true|-3.5135| 0.71791|2013-10-03 20:47:00|2019-01-04 15:01:52|
| RF00003| U1| 1304| U1 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U1 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 100| 15436| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 837| 0| null| | 0.6869| -8.54663| 1600000| 421575| 267| 166| true|-3.7781| 0.71616|2013-10-03 20:57:11|2019-01-04 15:01:52|
| RF00004| U2| 1305| U2 spliceosomal RNA|Griffiths-Jones S...|The uRNA database...| 46.0| 46.0| 45.9|U2 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 208| 16562| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1102| 0| null| | 0.55222| -9.81359| 1600000| 403693| 301| 193| true|-3.5144| 0.71292|2013-10-03 20:58:30|2019-01-04 15:01:52|
| RF00005| tRNA| 1306| tRNA|Eddy SR, Griffith...| Eddy SR| 29.0| 29.0| 28.9|Transfer RNA (tRN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 954| 1429129| 0| 0| Gene; tRNA;|Published; PMID:8...| 9934| 0| null| | 0.64375| -4.21424| 1600000| 283681| 253| 71| true|-2.6167| 0.73401|2013-10-03 21:00:26|2019-01-04 15:01:52|
| RF00006| Vault| 1307| Vault RNA|Bateman A, Gardne...|Published; PMID:1...| 34.0| 34.1| 33.9|This family of RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 73| 564| 0| 0| Gene;|Published; PMID:1...| 94| 0| null| | 0.63669| -4.8243| 1600000| 279629| 406| 101| true|-3.5531| 0.71855|2013-10-03 22:04:04|2019-01-04 15:01:52|
| RF00007| U12| 1308|U12 minor spliceo...|Griffiths-Jones S...|Shukla GC and Pad...| 53.0| 53.0| 52.9|The U12 small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 62| 531| 0| 0|Gene; snRNA; spli...|Predicted; Griffi...| 336| 0| null| | 0.55844| -9.95163| 1600000| 493455| 520| 155| true|-3.1678| 0.71782|2013-10-03 22:04:07|2019-01-04 15:01:52|
| RF00008| Hammerhead_3| 1309|Hammerhead ribozy...| Bateman A| Bateman A| 29.0| 29.0| 28.9|The hammerhead ri...| Hammerhead|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 82| 3098| 0| 0| Gene; ribozyme;|Published; PMID:7...| 176| 0| null| | 0.63206| -3.83325| 1600000| 199872| 394| 58| true| -4.375| 0.71923|2013-10-03 22:04:11|2019-01-04 15:01:52|
| RF00009| RNaseP_nuc| 1310| Nuclear RNase P|Griffiths-Jones S...|Brown JW, The Rib...| 28.0| 28.0| 27.9|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 116| 1237| 0| 0| Gene; ribozyme;|Published; PMID:9...| 763| 0| null| | 0.7641| -8.04053| 1600000| 274636|1082| 303| true|-4.3673| 0.70576|2013-10-03 22:04:14|2019-01-04 15:01:52|
| RF00010|RNaseP_bact_a| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 100.0| 100.5| 99.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 458| 6023| 0| 0| Gene; ribozyme;|Published; PMID:9...| 6324| 0| null| | 0.76804| -8.48988| 1600000| 366265| 873| 367| true|-4.3726| 0.70355|2013-10-03 22:04:21|2019-01-04 15:01:52|
| RF00011|RNaseP_bact_b| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 97.0| 97.1| 96.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 114| 676| 0| 0| Gene; ribozyme;|Published; PMID:9...| 767| 0| null| | 0.69906| -8.4903| 1600000| 418092| 675| 366| true|-4.0357| 0.70361|2013-10-03 22:04:51|2019-01-04 15:01:52|
| RF00012| U3| 1312|Small nucleolar R...| Gardner PP, Marz M|Published; PMID:1...| 34.0| 34.0| 33.9|Small nucleolar R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 87| 3924| 0| 0|Gene; snRNA; snoR...|Published; PMID:1...| 416| 0| null| | 0.59795| -9.77278| 1600000| 400072| 326| 218| true|-3.8301| 0.71077|2013-10-03 22:04:54|2019-01-04 15:01:52|
| RF00013| 6S| 2461| 6S / SsrS RNA|Bateman A, Barric...| Barrick JE| 48.0| 48.0| 47.9|E. coli 6S RNA wa...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 149| 3576| 0| 0| Gene;|Published; PMID:1...| 3309| 0| null| | 0.56243|-10.04259| 1600000| 331091| 277| 188| true|-3.5895| 0.71351|2013-10-03 22:05:06|2019-01-04 15:01:52|
| RF00014| DsrA| 1237| DsrA RNA| Bateman A| Bateman A| 60.0| 61.5| 57.6|DsrA RNA regulate...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 5| 35| 0| 0| Gene; sRNA;|Published; PMID:9...| 39| 0| null| | 0.53383| -8.38474| 1600000| 350673| 177| 85| true|-3.3562| 0.71888|2013-02-01 11:56:19|2019-01-04 15:01:52|
| RF00015| U4| 1314| U4 spliceosomal RNA| Griffiths-Jones SR|Zwieb C, The uRNA...| 46.0| 46.0| 45.9|U4 small nuclear ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 170| 7522| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1025| 0| null| | 0.58145| -8.85604| 1600000| 407516| 575| 140| true|-3.5007| 0.71795|2013-10-03 22:05:22|2019-01-04 15:01:52|
| RF00016| SNORD14| 1242|Small nucleolar R...|Griffiths-Jones S...| Griffiths-Jones SR| 64.0| 64.1| 63.9|U14 small nucleol...| U14|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 18| 1182| 0| 0|Gene; snRNA; snoR...| Predicted; PFOLD| 221| 0| null| | 0.63073| -3.65386| 1600000| 232910| 229| 116| true| -3.128| 0.71819|2013-02-01 11:56:23|2019-01-04 15:01:52|
| RF00017| Metazoa_SRP| 1315|Metazoan signal r...| Gardner PP|Published; PMID:1...| 70.0| 70.0| 69.9|The signal recogn...|SRP_euk_arch; 7SL...|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 91| 42386| 0| 0| Gene;|Published; PMID:1...| 402| 0| null| | 0.64536| -9.85267| 1600000| 488632| 514| 301| true|-4.0177| 0.70604|2013-10-03 22:07:53|2019-01-04 15:01:52|
| RF00018| CsrB| 2460|CsrB/RsmB RNA family|Bateman A, Gardne...| Bateman A| 71.0| 71.4| 70.9|The CsrB RNA bind...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 38| 254| 0| 0| Gene; sRNA;|Predicted; PFOLD;...| 196| 0| null| | 0.69326| -9.81172| 1600000| 546392| 555| 356| true|-4.0652| 0.70388|2013-10-03 23:07:27|2019-01-04 15:01:52|
| RF00019| Y_RNA| 1317| Y RNA|Griffiths-Jones S...|Griffiths-Jones S...| 38.0| 38.0| 37.9|Y RNAs are compon...| Y1; Y2; Y3; Y5|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 104| 8521| 0| 0| Gene;|Published; PMID:1...| 123| 0| null| | 0.59183| -5.14312| 1600000| 189478| 249| 98| true|-2.8418| 0.7187|2013-10-03 23:07:38|2019-01-04 15:01:52|
| RF00020| U5| 1318| U5 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U5 RNA is a compo...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 180| 7524| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1001| 0| null| | 0.50732| -5.54774| 1600000| 339349| 331| 116| true|-4.1327| 0.7182|2013-10-03 23:08:43|2019-01-04 15:01:52|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
only showing top 20 rows
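###Markdown
As noted in the overview, the DataFrame returned by a JDBC read can be processed with Spark SQL like any other DataFrame. The following cell is a minimal sketch (an addition for illustration) that registers `dfMySQL` from the previous cell as a temporary view and queries it; the column names are taken from the output above.
###Code
# Illustrative sketch only: register the JDBC-backed DataFrame as a temporary view and query it with Spark SQL
dfMySQL.createOrReplaceTempView("rfam_family")
spark.sql(
    "SELECT rfam_id, num_seed FROM rfam_family WHERE num_seed > 500 ORDER BY num_seed DESC"
).show(5)
###Output
_____no_output_____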
###Markdown
Connect to a Test or Temporary MySQL Instance> **Note:** The following code won't work if the MySQL instance has been shut down.
###Code
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://172.31.33.215:3306/db1") \
.option("dbtable", "db1.fruit") \
.option("user", "root") \
.option("password", "my-secret-pw") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
_____no_output_____
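###Markdown
The notebook only demonstrates reads from MySQL. As a hedged sketch (an addition, assuming the test instance above is still reachable, the credentials are valid, and writes are allowed), a DataFrame can be written back through the same JDBC options; the target table name `db1.fruit_copy` is hypothetical.
###Code
# Illustrative sketch only: append the DataFrame read above to a (hypothetical) MySQL table over JDBC
dfMySQL.write \
    .format("jdbc") \
    .option("url", "jdbc:mysql://172.31.33.215:3306/db1") \
    .option("dbtable", "db1.fruit_copy") \
    .option("user", "root") \
    .option("password", "my-secret-pw") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .mode("append") \
    .save()
###Output
_____no_output_____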
###Markdown
Connect to a PostgreSQL Database
###Code
# Load data from a JDBC source
dfPS = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.load()
dfPS2 = spark.read \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename",
properties={"user": "username", "password": "password"})
# Specify DataFrame column data types on read
dfPS3 = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.option("customSchema", "id DECIMAL(38, 0), name STRING") \
.load()
# Save data to a JDBC source
dfPS.write \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.save()
dfPS2.write \
    .jdbc("jdbc:postgresql:dbserver", "schema.tablename",
          properties={"user": "username", "password": "password"})
# Specify create table column data types on write
dfPS.write \
.option("createTableColumnTypes", "name CHAR(64), comments VARCHAR(1024)") \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename", properties={"user": "username", "password": "password"})
###Output
_____no_output_____
###Markdown
Connect to an Oracle Database
###Code
# Read a table from Oracle (table: hr.emp)
dfORA = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", "hr.emp") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA.printSchema()
dfORA.show()
# Read a query from Oracle
query = "(select empno,ename,dname from emp, dept where emp.deptno = dept.deptno) emp"
dfORA1 = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", query) \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA1.printSchema()
dfORA1.show()
###Output
_____no_output_____
###Markdown
Connect to an MS SQL Server Database
###Code
# Read a table from MS SQL Server
dfMS = spark.read \
.format("jdbc") \
    .option("url", "jdbc:sqlserver://hostname:portnumber;databaseName=DB") \
.option("dbtable", "db_table_name") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver" ) \
.load()
dfMS.printSchema()
dfMS.show()
###Output
_____no_output_____
###Markdown
Connect to a Redshift Database
###Code
# Read data from a table
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Read data from a query
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("query", "select x, count(*) my_table group by x") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Write data back to a table
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.mode("error") \
.save()
# Use IAM role-based authentication
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.option("aws_iam_role", "arn:aws:iam::123456789000:role/redshift_iam_role") \
.mode("error") \
.save()
###Output
_____no_output_____
###Markdown
CleanupPrior to exiting, release disk space, computation, and memory resources consumed by the active session:- [Delete Data](spark-jdbc-delete-data)- [Release Spark Resources](spark-jdbc-release-spark-resources) Delete DataYou can optionally delete any of the directories or files that you created.See the instructions in the [Creating and Deleting Container Directories](https://www.iguazio.com/docs/tutorials/latest-release/getting-started/containers/create-delete-container-dirs) tutorial.For example, the following code uses a local file-system command to delete a **<running user>/examples/spark-jdbc** directory in the "users" container.Edit the path, as needed, then remove the comment mark (``) and run the code.
###Code
# !rm -rf /User/examples/spark-jdbc/
###Output
_____no_output_____
###Markdown
Release Spark ResourcesWhen you're done, run the following command to stop your Spark session and release its computation and memory resources:
###Code
spark.stop()
###Output
_____no_output_____
###Markdown
Spark JDBC to Databases1. [Overview](Overview)2. [Set Up](Set-UP)3. [Initiate Spark Session](Initiate-Spark-Session)4. [Spark JDBC to Databases](Spark-JDBC-to-Databases) 1. [Spark JDBC to MySQL](Spark-JDBC-to-MySQL) 2. [Spark JDBC to PostgreSQL](Spark-JDBC-to-PostgreSQL) 3. [Spark JDBC to Oracle](Spark-JDBC-to-Oracle) 4. [Spark JDBC to MS SQL Server](Spark-JDBC-to-MS-SQL-Server) 5. [Spark JDBC to Redshift](Spark-JDBC-to-Redshift)5. [Clean Up](Clean-Up) OverviewSpark SQL includes a data source that can read data from many databases using JDBC. The results are returned as a DataFrame that can easily be processed in Spark SQL or joined with other data sources. **Prerequisites** In the SPARK_HOME/conf/spark-defaults.conf file, add the paths of the JDBC drivers for PostgreSQL, MySQL, Oracle, and SQL Server to the following two Spark properties: * spark.driver.extraClassPath* spark.executor.extraClassPath For more details read [Spark JDBC to Databases](https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#jdbc-to-other-databases) Set UP
###Code
import os
# Iguazio env
V3IO_USER = os.getenv('V3IO_USERNAME')
V3IO_HOME = os.getenv('V3IO_HOME')
V3IO_HOME_URL = os.getenv('V3IO_HOME_URL')
# Database properties
%env DB_HOST = "" # Database host's fully qualified name
%env DB_PORT = "" # Port num of the database
%env DB_DRIVER = "" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]
%env DB_Name = "" # Database|Schema Name
%env DB_TABLE = "" # Table Name
%env DB_USER = "" # Database User Name
%env DB_PASSWORD = "" # Database User's Password
###Output
env: DB_HOST="" # Database host's fully qualified name
env: DB_PORT="" # Port num of the database
env: DB_DRIVER="" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]
env: DB_Name="" # Database|Schema Name
env: DB_TABLE="" # Table Name
env: DB_USER="" # Database User Name
env: DB_PASSWORD="" # Database User's Password
###Markdown
Initiate Spark Session
###Code
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
# METHOD I:
# Update Spark configurations of the following two extraClassPath with the JDBC driver location
# prior to initiating a Spark session:
# spark.driver.extraClassPath
# spark.executor.extraClassPath
#
# NOTE:
# If you don't connect to MySQL, replace the MySQL connector with the other database's JDBC connector
# in the following two extraClassPath.
#
# Initiate a Spark Session
spark = SparkSession.builder.\
appName("Spark JDBC to Databases - ipynb").\
config("spark.driver.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar").\
config("spark.executor.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar").getOrCreate()
"""
# METHOD II:
# Use "PYSPARK_SUBMIT_ARGS" with "--packages" option. (The preferred way)
#
# Usage:
# os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages dialect:dialect-specific-jdbc-connector:version# pyspark-shell"
#
# NOTE:
# If you don't connect to MySQL, replace the MySQL connector with the other database's JDBC connector
# in the following line.
"""
# Set PYSPARK_SUBMIT_ARGS
#os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
"""
# NOTE:
# If you use PYSPARK_SUBMIT_ARGS, use the following way to initiate the Spark session WITHOUT update:
# spark.driver.extraClassPath
# spark.executor.extraClassPath
"""
# Initiate a Spark Session
#spark = SparkSession.builder.appName("Spark JDBC to Databases - ipynb").getOrCreate()
import pprint
# List Spark configurations
# Verify if JDBC drivers' path was properly added to both of Spark driver and executor extra LibPath.
conf = spark.sparkContext._conf.getAll()
pprint.pprint(conf)
###Output
[('spark.sql.catalogImplementation', 'in-memory'),
('spark.driver.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar'),
('spark.executor.memory', '2G'),
('spark.repl.local.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar'),
('spark.executor.id', 'driver'),
('spark.driver.port', '41461'),
('spark.cores.max', '4'),
('spark.master', 'spark://spark-9nv9ola1rl-3qgje-master:7077'),
('spark.driver.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.executor.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.executor.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.app.name', 'Spark JDBC to Databases - ipynb'),
('spark.driver.host', '10.233.92.90'),
('spark.rdd.compress', 'True'),
('spark.executorEnv.V3IO_ACCESS_KEY', '07934877-3b89-4f88-b08e-a6ea1fb1092b'),
('spark.app.id', 'app-20190430214617-0002'),
('spark.serializer.objectStreamReset', '100'),
('spark.executor.cores', '1'),
('spark.executor.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.submit.deployMode', 'client'),
('spark.submit.pyFiles', '/igz/java/libs/v3io-py.zip'),
('spark.driver.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.ui.showConsoleProgress', 'true'),
('spark.executorEnv.V3IO_USERNAME', 'iguazio')]
###Markdown
Spark JDBC to Databases Spark JDBC to MySQL Connecting to a public mySQL instance
###Code
#Loading data from a JDBC source
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://mysql-rfam-public.ebi.ac.uk:4497/Rfam") \
.option("dbtable", "Rfam.family") \
.option("user", "rfamro") \
.option("password", "") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
|rfam_acc| rfam_id|auto_wiki| description| author| seed_source|gathering_cutoff|trusted_cutoff|noise_cutoff| comment| previous_id| cmbuild| cmcalibrate| cmsearch|num_seed|num_full|num_genome_seq|num_refseq| type| structure_source|number_of_species|number_3d_structures|num_pseudonokts|tax_seed|ecmli_lambda| ecmli_mu|ecmli_cal_db|ecmli_cal_hits|maxl|clen|match_pair_node|hmm_tau|hmm_lambda| created| updated|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
| RF00001| 5S_rRNA| 1302| 5S ribosomal RNA|Griffiths-Jones S...|Szymanski et al, ...| 38.0| 38.0| 37.9|5S ribosomal RNA ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 712| 139932| 0| 0| Gene; rRNA;|Published; PMID:1...| 8022| 0| null| | 0.59496| -5.32219| 1600000| 213632| 305| 119| true|-3.7812| 0.71822|2013-10-03 20:41:44|2019-01-04 15:01:52|
| RF00002| 5_8S_rRNA| 1303| 5.8S ribosomal RNA|Griffiths-Jones S...|Wuyts et al, Euro...| 42.0| 42.0| 41.9|5.8S ribosomal RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 61| 4716| 0| 0| Gene; rRNA;|Published; PMID:1...| 587| 0| null| | 0.65546| -9.33442| 1600000| 410495| 277| 154| true|-3.5135| 0.71791|2013-10-03 20:47:00|2019-01-04 15:01:52|
| RF00003| U1| 1304| U1 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U1 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 100| 15436| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 837| 0| null| | 0.6869| -8.54663| 1600000| 421575| 267| 166| true|-3.7781| 0.71616|2013-10-03 20:57:11|2019-01-04 15:01:52|
| RF00004| U2| 1305| U2 spliceosomal RNA|Griffiths-Jones S...|The uRNA database...| 46.0| 46.0| 45.9|U2 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 208| 16562| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1102| 0| null| | 0.55222| -9.81359| 1600000| 403693| 301| 193| true|-3.5144| 0.71292|2013-10-03 20:58:30|2019-01-04 15:01:52|
| RF00005| tRNA| 1306| tRNA|Eddy SR, Griffith...| Eddy SR| 29.0| 29.0| 28.9|Transfer RNA (tRN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 954| 1429129| 0| 0| Gene; tRNA;|Published; PMID:8...| 9934| 0| null| | 0.64375| -4.21424| 1600000| 283681| 253| 71| true|-2.6167| 0.73401|2013-10-03 21:00:26|2019-01-04 15:01:52|
| RF00006| Vault| 1307| Vault RNA|Bateman A, Gardne...|Published; PMID:1...| 34.0| 34.1| 33.9|This family of RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 73| 564| 0| 0| Gene;|Published; PMID:1...| 94| 0| null| | 0.63669| -4.8243| 1600000| 279629| 406| 101| true|-3.5531| 0.71855|2013-10-03 22:04:04|2019-01-04 15:01:52|
| RF00007| U12| 1308|U12 minor spliceo...|Griffiths-Jones S...|Shukla GC and Pad...| 53.0| 53.0| 52.9|The U12 small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 62| 531| 0| 0|Gene; snRNA; spli...|Predicted; Griffi...| 336| 0| null| | 0.55844| -9.95163| 1600000| 493455| 520| 155| true|-3.1678| 0.71782|2013-10-03 22:04:07|2019-01-04 15:01:52|
| RF00008| Hammerhead_3| 1309|Hammerhead ribozy...| Bateman A| Bateman A| 29.0| 29.0| 28.9|The hammerhead ri...| Hammerhead|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 82| 3098| 0| 0| Gene; ribozyme;|Published; PMID:7...| 176| 0| null| | 0.63206| -3.83325| 1600000| 199872| 394| 58| true| -4.375| 0.71923|2013-10-03 22:04:11|2019-01-04 15:01:52|
| RF00009| RNaseP_nuc| 1310| Nuclear RNase P|Griffiths-Jones S...|Brown JW, The Rib...| 28.0| 28.0| 27.9|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 116| 1237| 0| 0| Gene; ribozyme;|Published; PMID:9...| 763| 0| null| | 0.7641| -8.04053| 1600000| 274636|1082| 303| true|-4.3673| 0.70576|2013-10-03 22:04:14|2019-01-04 15:01:52|
| RF00010|RNaseP_bact_a| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 100.0| 100.5| 99.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 458| 6023| 0| 0| Gene; ribozyme;|Published; PMID:9...| 6324| 0| null| | 0.76804| -8.48988| 1600000| 366265| 873| 367| true|-4.3726| 0.70355|2013-10-03 22:04:21|2019-01-04 15:01:52|
| RF00011|RNaseP_bact_b| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 97.0| 97.1| 96.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 114| 676| 0| 0| Gene; ribozyme;|Published; PMID:9...| 767| 0| null| | 0.69906| -8.4903| 1600000| 418092| 675| 366| true|-4.0357| 0.70361|2013-10-03 22:04:51|2019-01-04 15:01:52|
| RF00012| U3| 1312|Small nucleolar R...| Gardner PP, Marz M|Published; PMID:1...| 34.0| 34.0| 33.9|Small nucleolar R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 87| 3924| 0| 0|Gene; snRNA; snoR...|Published; PMID:1...| 416| 0| null| | 0.59795| -9.77278| 1600000| 400072| 326| 218| true|-3.8301| 0.71077|2013-10-03 22:04:54|2019-01-04 15:01:52|
| RF00013| 6S| 2461| 6S / SsrS RNA|Bateman A, Barric...| Barrick JE| 48.0| 48.0| 47.9|E. coli 6S RNA wa...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 149| 3576| 0| 0| Gene;|Published; PMID:1...| 3309| 0| null| | 0.56243|-10.04259| 1600000| 331091| 277| 188| true|-3.5895| 0.71351|2013-10-03 22:05:06|2019-01-04 15:01:52|
| RF00014| DsrA| 1237| DsrA RNA| Bateman A| Bateman A| 60.0| 61.5| 57.6|DsrA RNA regulate...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 5| 35| 0| 0| Gene; sRNA;|Published; PMID:9...| 39| 0| null| | 0.53383| -8.38474| 1600000| 350673| 177| 85| true|-3.3562| 0.71888|2013-02-01 11:56:19|2019-01-04 15:01:52|
| RF00015| U4| 1314| U4 spliceosomal RNA| Griffiths-Jones SR|Zwieb C, The uRNA...| 46.0| 46.0| 45.9|U4 small nuclear ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 170| 7522| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1025| 0| null| | 0.58145| -8.85604| 1600000| 407516| 575| 140| true|-3.5007| 0.71795|2013-10-03 22:05:22|2019-01-04 15:01:52|
| RF00016| SNORD14| 1242|Small nucleolar R...|Griffiths-Jones S...| Griffiths-Jones SR| 64.0| 64.1| 63.9|U14 small nucleol...| U14|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 18| 1182| 0| 0|Gene; snRNA; snoR...| Predicted; PFOLD| 221| 0| null| | 0.63073| -3.65386| 1600000| 232910| 229| 116| true| -3.128| 0.71819|2013-02-01 11:56:23|2019-01-04 15:01:52|
| RF00017| Metazoa_SRP| 1315|Metazoan signal r...| Gardner PP|Published; PMID:1...| 70.0| 70.0| 69.9|The signal recogn...|SRP_euk_arch; 7SL...|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 91| 42386| 0| 0| Gene;|Published; PMID:1...| 402| 0| null| | 0.64536| -9.85267| 1600000| 488632| 514| 301| true|-4.0177| 0.70604|2013-10-03 22:07:53|2019-01-04 15:01:52|
| RF00018| CsrB| 2460|CsrB/RsmB RNA family|Bateman A, Gardne...| Bateman A| 71.0| 71.4| 70.9|The CsrB RNA bind...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 38| 254| 0| 0| Gene; sRNA;|Predicted; PFOLD;...| 196| 0| null| | 0.69326| -9.81172| 1600000| 546392| 555| 356| true|-4.0652| 0.70388|2013-10-03 23:07:27|2019-01-04 15:01:52|
| RF00019| Y_RNA| 1317| Y RNA|Griffiths-Jones S...|Griffiths-Jones S...| 38.0| 38.0| 37.9|Y RNAs are compon...| Y1; Y2; Y3; Y5|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 104| 8521| 0| 0| Gene;|Published; PMID:1...| 123| 0| null| | 0.59183| -5.14312| 1600000| 189478| 249| 98| true|-2.8418| 0.7187|2013-10-03 23:07:38|2019-01-04 15:01:52|
| RF00020| U5| 1318| U5 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U5 RNA is a compo...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 180| 7524| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1001| 0| null| | 0.50732| -5.54774| 1600000| 339349| 331| 116| true|-4.1327| 0.7182|2013-10-03 23:08:43|2019-01-04 15:01:52|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
only showing top 20 rows
###Markdown
Connecting to a testing/temporary MySQL instance**NOTE** This won't work if the testing MySQL instance has been shut down.
###Code
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://172.31.33.215:3306/db1") \
.option("dbtable", "db1.fruit") \
.option("user", "root") \
.option("password", "my-secret-pw") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
+--------+------+-------------------+
|fruit_id| name| variety|
+--------+------+-------------------+
| 1| Apple| Red Delicious|
| 2| Pear| Comice|
| 3|Orange| Navel|
| 4| Pear| Bartlett|
| 5|Orange| Blood|
| 6| Apple|Cox's Orange Pippin|
| 7| Apple| Granny Smith|
| 8| Pear| Anjou|
| 9|Orange| Valencia|
| 10|Banana| Plantain|
| 11|Banana| Burro|
| 12|Banana| Cavendish|
+--------+------+-------------------+
###Markdown
Spark JDBC to PostgreSQL
###Code
#Loading data from a JDBC source
dfPS = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.load()
dfPS2 = spark.read \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename",
properties={"user": "username", "password": "password"})
# Specifying dataframe column data types on read
dfPS3 = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.option("customSchema", "id DECIMAL(38, 0), name STRING") \
.load()
# Saving data to a JDBC source
dfPS.write \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.save()
dfPS2.write \
    .jdbc("jdbc:postgresql:dbserver", "schema.tablename",
          properties={"user": "username", "password": "password"})
# Specifying create table column data types on write
dfPS.write \
.option("createTableColumnTypes", "name CHAR(64), comments VARCHAR(1024)") \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename", properties={"user": "username", "password": "password"})
###Output
_____no_output_____
###Markdown
Spark JDBC to Oracle
###Code
# Read a table from Oracle (table: hr.emp)
dfORA = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", "hr.emp") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA.printSchema()
dfORA.show()
# Read a query from Oracle
query = "(select empno,ename,dname from emp, dept where emp.deptno = dept.deptno) emp"
dfORA1 = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", query) \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA1.printSchema()
dfORA1.show()
###Output
_____no_output_____
###Markdown
Spark JDBC to MS SQL Server
###Code
# Read a table from MS SQL Server
dfMS = spark.read \
.format("jdbc") \
    .option("url", "jdbc:sqlserver://hostname:portnumber;databaseName=DB") \
.option("dbtable", "db_table_name") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver" ) \
.load()
dfMS.printSchema()
dfMS.show()
###Output
_____no_output_____
###Markdown
Spark JDBC to Redshift
###Code
# Read data from a table
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Read data from a query
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("query", "select x, count(*) my_table group by x") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Write back to a table
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.mode("error") \
.save()
# Using IAM Role based authentication
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.option("aws_iam_role", "arn:aws:iam::123456789000:role/redshift_iam_role") \
.mode("error") \
.save()
###Output
_____no_output_____
###Markdown
Clean UPPrior to exiting, let's run housekeeping to release disk space, computation and memory resources taken by this session. Remove Data
###Code
# Uncomment the following line to remove the data
# !rm -rf $HOME/PATH/*
###Output
_____no_output_____
###Markdown
Stop Spark Session
###Code
spark.stop()
###Output
_____no_output_____
###Markdown
Spark JDBC to Databases- [Overview](spark-jdbc-overview)- [Setup](spark-jdbc-setup) - [Define Environment Variables](spark-jdbc-define-envir-vars) - [Initiate a Spark JDBC Session](spark-jdbc-init-session) - [Load Driver Packages Dynamically](spark-jdbc-init-dynamic-pkg-load) - [Load Driver Packages Locally](spark-jdbc-init-local-pkg-load)- [Connect to Databases Using Spark JDBC](spark-jdbc-connect-to-dbs) - [Connect to a MySQL Database](spark-jdbc-to-mysql) - [Connecting to a Public MySQL Instance](spark-jdbc-to-mysql-public) - [Connecting to a Test or Temporary MySQL Instance](spark-jdbc-to-mysql-test-or-temp) - [Connect to a PostgreSQL Database](spark-jdbc-to-postgresql) - [Connect to an Oracle Database](spark-jdbc-to-oracle) - [Connect to an MS SQL Server Database](spark-jdbc-to-ms-sql-server) - [Connect to a Redshift Database](spark-jdbc-to-redshift)- [Cleanup](spark-jdbc-cleanup) - [Delete Data](spark-jdbc-delete-data) - [Release Spark Resources](spark-jdbc-release-spark-resources) OverviewSpark SQL includes a data source that can read data from other databases using Java database connectivity (**JDBC**).The results are returned as a Spark DataFrame that can easily be processed in Spark SQL or joined with other data sources.For more information, see the [Spark documentation](https://spark.apache.org/docs/2.3.1/sql-programming-guide.htmljdbc-to-other-databases). Setup Define Environment VariablesBegin by initializing some environment variables.> **Note:** You need to edit the following code to assign valid values to the database variables (`DB_XXX`).
###Code
import os
# Read Iguazio Data Science Platform ("the platform") environment variables into local variables
V3IO_USER = os.getenv('V3IO_USERNAME')
V3IO_HOME = os.getenv('V3IO_HOME')
V3IO_HOME_URL = os.getenv('V3IO_HOME_URL')
# Define database environment variables
# TODO: Edit the variable definitions to assign valid values for your environment.
%env DB_HOST = "" # Database host as a fully qualified name (FQN)
%env DB_PORT = "" # Database port number
%env DB_DRIVER = ""      # Database driver [mysql|postgresql|oracle:thin|sqlserver]
%env DB_Name = "" # Database|schema name
%env DB_TABLE = "" # Table name
%env DB_USER = "" # Database username
%env DB_PASSWORD = "" # Database user password
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
###Output
env: DB_HOST="" # Database host's fully qualified name
env: DB_PORT="" # Port num of the database
env: DB_DRIVER="" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]
env: DB_Name="" # Database|Schema Name
env: DB_TABLE="" # Table Name
env: DB_USER="" # Database User Name
env: DB_PASSWORD="" # Database User's Password
###Markdown
Initiate a Spark JDBC SessionYou can select between two methods for initiating a Spark session with JDBC drivers ("Spark JDBC session"):- [Load Driver Packages Dynamically](spark-jdbc-init-dynamic-pkg-load) (preferred)- [Load Driver Packages Locally](spark-jdbc-init-local-pkg-load) Load Driver Packages DynamicallyThe preferred method for initiating a Spark JDBC session is to load the required JDBC driver packages dynamically from https://spark-packages.org/ by doing the following:1. Set the `PYSPARK_SUBMIT_ARGS` environment variable to `"--packages <group-id>:<artifact-id>:<version> pyspark-shell"`.2. Initiate a new Spark session.The following example demonstrates how to initiate a Spark session that uses version 5.1.39 of the **mysql-connector-java** MySQL JDBC database driver (`mysql:mysql-connector-java:5.1.39`).
###Code
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
# Configure the Spark JDBC driver package
# TODO: Replace `mysql:mysql-connector-java:5.1.39` with the required driver-package information.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
# Initiate a new Spark session; you can change the application name
spark = SparkSession.builder.appName("Spark JDBC tutorial").getOrCreate()
###Output
_____no_output_____
###Markdown
Load Driver Packages LocallyYou can also load the Spark JDBC driver package from the local file system of your Iguazio Data Science Platform ("the platform").It's recommended that you use this method only if you don't have an internet connection ("dark-site installations") or if there's no official Spark package for your database.The platform comes pre-deployed with MySQL, PostgreSQL, Oracle, Redshift, and MS SQL Server JDBC driver packages, which are found in the **/spark/3rd_party** directory (**$SPARK_HOME/3rd_party**).You can also copy additional driver packages or different versions of the pre-deployed drivers to the platform — for example, from the **Data** dashboard page.To load a JDBC driver package locally, you need to set the `spark.driver.extraClassPath` and `spark.executor.extraClassPath` Spark configuration properties to the path to a Spark JDBC driver package in the platform's file system.You can do this using either of the following alternative methods:- Preconfigure the path to the driver package — 1. In your Spark-configuration file — **$SPARK_HOME/conf/spark-defaults.conf** — set the `extraClassPath` configuration properties to the path to the relevant driver package: ```python spark.driver.extraClassPath = "<path-to-driver-package>" spark.executor.extraClassPath = "<path-to-driver-package>" ``` 2. Initiate a new Spark session.- Configure the path to the driver package as part of the initiation of a new Spark session: ```python spark = SparkSession.builder. \ appName("<app-name>"). \ config("spark.driver.extraClassPath", "<path-to-driver-package>"). \ config("spark.executor.extraClassPath", "<path-to-driver-package>"). \ getOrCreate() ```The following example demonstrates how to initiate a Spark session that uses the pre-deployed version 8.0.13 of the **mysql-connector-java** MySQL JDBC database driver (**/spark/3rd_party/mysql-connector-java-8.0.13.jar**).
###Code
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
# METHOD I
# Edit your Spark configuration file ($SPARK_HOME/conf/spark-defaults.conf), set the `spark.driver.extraClassPath` and
# `spark.executor.extraClassPath` properties to the local file-system path to a pre-deployed Spark JDBC driver package.
# Replace "/spark/3rd_party/mysql-connector-java-8.0.13.jar" with the relevant path.
# spark.driver.extraClassPath = "/spark/3rd_party/mysql-connector-java-8.0.13.jar"
# spark.executor.extraClassPath = "/spark/3rd_party/mysql-connector-java-8.0.13.jar"
#
# Then, initiate a new Spark session; you can change the application name.
# spark = SparkSession.builder.appName("Spark JDBC tutorial").getOrCreate()
# METHOD II
# Initiate a new Spark Session; you can change the application name.
# Set the same `extraClassPath` configuration properties as in Method #1 as part of the initiation command.
# Replace "/spark/3rd_party/mysql-connector-java-8.0.13.jar" with the relevant path.
# local file-system path to a pre-deployed Spark JDBC driver package
spark = SparkSession.builder. \
appName("Spark JDBC tutorial"). \
config("spark.driver.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar"). \
config("spark.executor.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar"). \
getOrCreate()
import pprint
# Verify your configuration: run the following code to list the current Spark configurations, and check the output to verify that the
# `spark.driver.extraClassPath` and `spark.executor.extraClassPath` properties are set to the correct local driver-package path.
conf = spark.sparkContext._conf.getAll()
pprint.pprint(conf)
###Output
[('spark.sql.catalogImplementation', 'in-memory'),
('spark.driver.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.app.id', 'app-20190704070308-0001'),
('spark.executor.memory', '2G'),
('spark.executor.id', 'driver'),
('spark.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.cores.max', '4'),
('spark.executorEnv.V3IO_ACCESS_KEY', 'bb79fffa-7582-4fd2-9347-a350335801fc'),
('spark.driver.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.executor.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.driver.port', '33751'),
('spark.driver.host', '10.233.92.91'),
('spark.executor.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.submit.pyFiles',
'/igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.app.name', 'Spark JDBC tutorial'),
('spark.repl.local.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.rdd.compress', 'True'),
('spark.serializer.objectStreamReset', '100'),
('spark.files',
'file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.executor.cores', '1'),
('spark.executor.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.submit.deployMode', 'client'),
('spark.driver.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.ui.showConsoleProgress', 'true'),
('spark.executorEnv.V3IO_USERNAME', 'iguazio'),
('spark.master', 'spark://spark-jddcm4iwas-qxw13-master:7077')]
###Markdown
Connect to Databases Using Spark JDBC Connect to a MySQL Database- [Connecting to a Public MySQL Instance](spark-jdbc-to-mysql-public)- [Connecting to a Test or Temporary MySQL Instance](spark-jdbc-to-mysql-test-or-temp) Connect to a Public MySQL Instance
###Code
#Loading data from a JDBC source
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://mysql-rfam-public.ebi.ac.uk:4497/Rfam") \
.option("dbtable", "Rfam.family") \
.option("user", "rfamro") \
.option("password", "") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
|rfam_acc| rfam_id|auto_wiki| description| author| seed_source|gathering_cutoff|trusted_cutoff|noise_cutoff| comment| previous_id| cmbuild| cmcalibrate| cmsearch|num_seed|num_full|num_genome_seq|num_refseq| type| structure_source|number_of_species|number_3d_structures|num_pseudonokts|tax_seed|ecmli_lambda| ecmli_mu|ecmli_cal_db|ecmli_cal_hits|maxl|clen|match_pair_node|hmm_tau|hmm_lambda| created| updated|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
| RF00001| 5S_rRNA| 1302| 5S ribosomal RNA|Griffiths-Jones S...|Szymanski et al, ...| 38.0| 38.0| 37.9|5S ribosomal RNA ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 712| 139932| 0| 0| Gene; rRNA;|Published; PMID:1...| 8022| 0| null| | 0.59496| -5.32219| 1600000| 213632| 305| 119| true|-3.7812| 0.71822|2013-10-03 20:41:44|2019-01-04 15:01:52|
| RF00002| 5_8S_rRNA| 1303| 5.8S ribosomal RNA|Griffiths-Jones S...|Wuyts et al, Euro...| 42.0| 42.0| 41.9|5.8S ribosomal RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 61| 4716| 0| 0| Gene; rRNA;|Published; PMID:1...| 587| 0| null| | 0.65546| -9.33442| 1600000| 410495| 277| 154| true|-3.5135| 0.71791|2013-10-03 20:47:00|2019-01-04 15:01:52|
| RF00003| U1| 1304| U1 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U1 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 100| 15436| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 837| 0| null| | 0.6869| -8.54663| 1600000| 421575| 267| 166| true|-3.7781| 0.71616|2013-10-03 20:57:11|2019-01-04 15:01:52|
| RF00004| U2| 1305| U2 spliceosomal RNA|Griffiths-Jones S...|The uRNA database...| 46.0| 46.0| 45.9|U2 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 208| 16562| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1102| 0| null| | 0.55222| -9.81359| 1600000| 403693| 301| 193| true|-3.5144| 0.71292|2013-10-03 20:58:30|2019-01-04 15:01:52|
| RF00005| tRNA| 1306| tRNA|Eddy SR, Griffith...| Eddy SR| 29.0| 29.0| 28.9|Transfer RNA (tRN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 954| 1429129| 0| 0| Gene; tRNA;|Published; PMID:8...| 9934| 0| null| | 0.64375| -4.21424| 1600000| 283681| 253| 71| true|-2.6167| 0.73401|2013-10-03 21:00:26|2019-01-04 15:01:52|
| RF00006| Vault| 1307| Vault RNA|Bateman A, Gardne...|Published; PMID:1...| 34.0| 34.1| 33.9|This family of RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 73| 564| 0| 0| Gene;|Published; PMID:1...| 94| 0| null| | 0.63669| -4.8243| 1600000| 279629| 406| 101| true|-3.5531| 0.71855|2013-10-03 22:04:04|2019-01-04 15:01:52|
| RF00007| U12| 1308|U12 minor spliceo...|Griffiths-Jones S...|Shukla GC and Pad...| 53.0| 53.0| 52.9|The U12 small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 62| 531| 0| 0|Gene; snRNA; spli...|Predicted; Griffi...| 336| 0| null| | 0.55844| -9.95163| 1600000| 493455| 520| 155| true|-3.1678| 0.71782|2013-10-03 22:04:07|2019-01-04 15:01:52|
| RF00008| Hammerhead_3| 1309|Hammerhead ribozy...| Bateman A| Bateman A| 29.0| 29.0| 28.9|The hammerhead ri...| Hammerhead|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 82| 3098| 0| 0| Gene; ribozyme;|Published; PMID:7...| 176| 0| null| | 0.63206| -3.83325| 1600000| 199872| 394| 58| true| -4.375| 0.71923|2013-10-03 22:04:11|2019-01-04 15:01:52|
| RF00009| RNaseP_nuc| 1310| Nuclear RNase P|Griffiths-Jones S...|Brown JW, The Rib...| 28.0| 28.0| 27.9|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 116| 1237| 0| 0| Gene; ribozyme;|Published; PMID:9...| 763| 0| null| | 0.7641| -8.04053| 1600000| 274636|1082| 303| true|-4.3673| 0.70576|2013-10-03 22:04:14|2019-01-04 15:01:52|
| RF00010|RNaseP_bact_a| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 100.0| 100.5| 99.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 458| 6023| 0| 0| Gene; ribozyme;|Published; PMID:9...| 6324| 0| null| | 0.76804| -8.48988| 1600000| 366265| 873| 367| true|-4.3726| 0.70355|2013-10-03 22:04:21|2019-01-04 15:01:52|
| RF00011|RNaseP_bact_b| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 97.0| 97.1| 96.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 114| 676| 0| 0| Gene; ribozyme;|Published; PMID:9...| 767| 0| null| | 0.69906| -8.4903| 1600000| 418092| 675| 366| true|-4.0357| 0.70361|2013-10-03 22:04:51|2019-01-04 15:01:52|
| RF00012| U3| 1312|Small nucleolar R...| Gardner PP, Marz M|Published; PMID:1...| 34.0| 34.0| 33.9|Small nucleolar R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 87| 3924| 0| 0|Gene; snRNA; snoR...|Published; PMID:1...| 416| 0| null| | 0.59795| -9.77278| 1600000| 400072| 326| 218| true|-3.8301| 0.71077|2013-10-03 22:04:54|2019-01-04 15:01:52|
| RF00013| 6S| 2461| 6S / SsrS RNA|Bateman A, Barric...| Barrick JE| 48.0| 48.0| 47.9|E. coli 6S RNA wa...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 149| 3576| 0| 0| Gene;|Published; PMID:1...| 3309| 0| null| | 0.56243|-10.04259| 1600000| 331091| 277| 188| true|-3.5895| 0.71351|2013-10-03 22:05:06|2019-01-04 15:01:52|
| RF00014| DsrA| 1237| DsrA RNA| Bateman A| Bateman A| 60.0| 61.5| 57.6|DsrA RNA regulate...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 5| 35| 0| 0| Gene; sRNA;|Published; PMID:9...| 39| 0| null| | 0.53383| -8.38474| 1600000| 350673| 177| 85| true|-3.3562| 0.71888|2013-02-01 11:56:19|2019-01-04 15:01:52|
| RF00015| U4| 1314| U4 spliceosomal RNA| Griffiths-Jones SR|Zwieb C, The uRNA...| 46.0| 46.0| 45.9|U4 small nuclear ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 170| 7522| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1025| 0| null| | 0.58145| -8.85604| 1600000| 407516| 575| 140| true|-3.5007| 0.71795|2013-10-03 22:05:22|2019-01-04 15:01:52|
| RF00016| SNORD14| 1242|Small nucleolar R...|Griffiths-Jones S...| Griffiths-Jones SR| 64.0| 64.1| 63.9|U14 small nucleol...| U14|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 18| 1182| 0| 0|Gene; snRNA; snoR...| Predicted; PFOLD| 221| 0| null| | 0.63073| -3.65386| 1600000| 232910| 229| 116| true| -3.128| 0.71819|2013-02-01 11:56:23|2019-01-04 15:01:52|
| RF00017| Metazoa_SRP| 1315|Metazoan signal r...| Gardner PP|Published; PMID:1...| 70.0| 70.0| 69.9|The signal recogn...|SRP_euk_arch; 7SL...|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 91| 42386| 0| 0| Gene;|Published; PMID:1...| 402| 0| null| | 0.64536| -9.85267| 1600000| 488632| 514| 301| true|-4.0177| 0.70604|2013-10-03 22:07:53|2019-01-04 15:01:52|
| RF00018| CsrB| 2460|CsrB/RsmB RNA family|Bateman A, Gardne...| Bateman A| 71.0| 71.4| 70.9|The CsrB RNA bind...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 38| 254| 0| 0| Gene; sRNA;|Predicted; PFOLD;...| 196| 0| null| | 0.69326| -9.81172| 1600000| 546392| 555| 356| true|-4.0652| 0.70388|2013-10-03 23:07:27|2019-01-04 15:01:52|
| RF00019| Y_RNA| 1317| Y RNA|Griffiths-Jones S...|Griffiths-Jones S...| 38.0| 38.0| 37.9|Y RNAs are compon...| Y1; Y2; Y3; Y5|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 104| 8521| 0| 0| Gene;|Published; PMID:1...| 123| 0| null| | 0.59183| -5.14312| 1600000| 189478| 249| 98| true|-2.8418| 0.7187|2013-10-03 23:07:38|2019-01-04 15:01:52|
| RF00020| U5| 1318| U5 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U5 RNA is a compo...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 180| 7524| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1001| 0| null| | 0.50732| -5.54774| 1600000| 339349| 331| 116| true|-4.1327| 0.7182|2013-10-03 23:08:43|2019-01-04 15:01:52|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
only showing top 20 rows
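###Markdown
The full `Rfam.family` table is fairly wide, so it can be cheaper to push a projection and filter down to MySQL instead of pulling every column. A minimal sketch, assuming the same public Rfam endpoint and credentials as above — the subquery and its alias (`fam`) are illustrative only:
###Code
# Wrap a SQL query in parentheses with an alias; Spark sends it to MySQL as a subquery
pushdown_query = "(SELECT rfam_acc, rfam_id, description, num_seed, num_full " \
                 "FROM Rfam.family WHERE num_full > 100000) AS fam"

dfFamilies = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://mysql-rfam-public.ebi.ac.uk:4497/Rfam") \
    .option("dbtable", pushdown_query) \
    .option("user", "rfamro") \
    .option("password", "") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .load()

dfFamilies.show(5)
###Output
_____no_output_____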
###Markdown
Connect to a Test or Temporary MySQL Instance> **Note:** The following code won't work if the MySQL instance has been shut down.
###Code
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://172.31.33.215:3306/db1") \
.option("dbtable", "db1.fruit") \
.option("user", "root") \
.option("password", "my-secret-pw") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
_____no_output_____
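###Markdown
Writing back over JDBC uses the same options plus a save mode. A minimal sketch against the same temporary instance, assuming it is still up and that the `db1.fruit_copy` target table name is just a placeholder:
###Code
# Append the rows read above into a copy table; "overwrite" would drop and recreate it instead
dfMySQL.write \
    .format("jdbc") \
    .option("url", "jdbc:mysql://172.31.33.215:3306/db1") \
    .option("dbtable", "db1.fruit_copy") \
    .option("user", "root") \
    .option("password", "my-secret-pw") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .mode("append") \
    .save()
###Output
_____no_output_____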
###Markdown
Connect to a PostgreSQL Database
###Code
# Load data from a JDBC source
dfPS = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.load()
dfPS2 = spark.read \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename",
properties={"user": "username", "password": "password"})
# Specify DataFrame column data types on read
dfPS3 = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.option("customSchema", "id DECIMAL(38, 0), name STRING") \
.load()
# Save data to a JDBC source
dfPS.write \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.save()
dfPS2.write \
    .jdbc("jdbc:postgresql:dbserver", "schema.tablename",
          properties={"user": "username", "password": "password"})
# Specify create table column data types on write
dfPS.write \
.option("createTableColumnTypes", "name CHAR(64), comments VARCHAR(1024)") \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename", properties={"user": "username", "password": "password"})
###Output
_____no_output_____
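###Markdown
By default a JDBC read comes through a single connection. For large tables you can parallelize it by giving Spark a numeric partition column and bounds — a sketch, assuming the table has an integer `id` column and that the bounds roughly cover its value range:
###Code
# Spark opens numPartitions connections, each scanning one id range
dfPSPart = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql:dbserver") \
    .option("dbtable", "schema.tablename") \
    .option("user", "username") \
    .option("password", "password") \
    .option("partitionColumn", "id") \
    .option("lowerBound", "1") \
    .option("upperBound", "1000000") \
    .option("numPartitions", "8") \
    .load()
###Output
_____no_output_____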
###Markdown
Connect to an Oracle Database
###Code
# Read a table from Oracle (table: hr.emp)
dfORA = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", "hr.emp") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA.printSchema()
dfORA.show()
# Read a query from Oracle
query = "(select empno,ename,dname from emp, dept where emp.deptno = dept.deptno) emp"
dfORA1 = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", query) \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA1.printSchema()
dfORA1.show()
###Output
_____no_output_____
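###Markdown
Oracle `NUMBER` columns often come back as wide decimal types. If you want plainer Spark types, the same `customSchema` option shown above for PostgreSQL also works here — a sketch, with column names taken from the `hr.emp` example and the target types chosen purely for illustration:
###Code
# Override the inferred column types on read
dfORA2 = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
    .option("dbtable", "hr.emp") \
    .option("user", "db_user_name") \
    .option("password", "password") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .option("customSchema", "EMPNO DECIMAL(6, 0), SAL DOUBLE, ENAME STRING") \
    .load()
dfORA2.printSchema()
###Output
_____no_output_____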
###Markdown
Connect to an MS SQL Server Database
###Code
# Read a table from MS SQL Server
dfMS = spark.read \
.format("jdbc") \
    .option("url", "jdbc:sqlserver://hostname:portnumber;databaseName=DB") \
.option("dbtable", "db_table_name") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver" ) \
.load()
dfMS.printSchema()
dfMS.show()
###Output
_____no_output_____
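###Markdown
Writing works symmetrically. A sketch of appending a DataFrame into SQL Server, reusing the placeholder connection details from the read above (the target table name is hypothetical):
###Code
# Append rows into an existing table (use "overwrite" to replace it)
dfMS.write \
    .format("jdbc") \
    .option("url", "jdbc:sqlserver://hostname:portnumber;databaseName=DB") \
    .option("dbtable", "db_table_name_copy") \
    .option("user", "db_user_name") \
    .option("password", "password") \
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
    .mode("append") \
    .save()
###Output
_____no_output_____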
###Markdown
Connect to a Redshift Database
###Code
# Read data from a table
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Read data from a query
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("query", "select x, count(*) my_table group by x") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Write data back to a table
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.mode("error") \
.save()
# Use IAM role-based authentication
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.option("aws_iam_role", "arn:aws:iam::123456789000:role/redshift_iam_role") \
.mode("error") \
.save()
###Output
_____no_output_____
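###Markdown
The `com.databricks.spark.redshift` source stages data through S3, which is what you want for bulk transfers. For small lookups you can also talk to the cluster directly with the generic `jdbc` source — a sketch, assuming a Redshift (or PostgreSQL-compatible) JDBC driver is on the executor classpath; the driver class name shown is the one commonly used for the JDBC 4.2 Redshift driver:
###Code
# Direct JDBC read; fine for small tables, avoid for large scans
dfRSdirect = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:redshift://redshifthost:5439/database") \
    .option("dbtable", "my_table") \
    .option("user", "username") \
    .option("password", "pass") \
    .option("driver", "com.amazon.redshift.jdbc42.Driver") \
    .load()
###Output
_____no_output_____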
###Markdown
CleanupPrior to exiting, release disk space, computation, and memory resources consumed by the active session:- [Delete Data](#spark-jdbc-delete-data)- [Release Spark Resources](#spark-jdbc-release-spark-resources) Delete DataYou can optionally delete any of the directories or files that you created.See the instructions in the [Creating and Deleting Container Directories](https://www.iguazio.com/docs/tutorials/latest-release/getting-started/containers/create-delete-container-dirs) tutorial.For example, the following code uses a local file-system command to delete a **<running user>/examples/spark-jdbc** directory in the "users" container.Edit the path, as needed, then remove the comment mark (`#`) and run the code.
###Code
# !rm -rf /User/examples/spark-jdbc/
###Output
_____no_output_____
###Markdown
Release Spark ResourcesWhen you're done, run the following command to stop your Spark session and release its computation and memory resources:
###Code
spark.stop()
###Output
_____no_output_____
###Markdown
Spark JDBC to Databases1. [Overview](#Overview)2. [Set Up](#Set-UP)3. [Initiate Spark Session](#Initiate-Spark-Session)4. [Spark JDBC to Databases](#Spark-JDBC-to-Databases) 1. [Spark JDBC to MySQL](#Spark-JDBC-to-MySQL) 2. [Spark JDBC to PostgreSQL](#Spark-JDBC-to-PostgreSQL) 3. [Spark JDBC to Oracle](#Spark-JDBC-to-Oracle) 4. [Spark JDBC to MS SQL Server](#Spark-JDBC-to-MS-SQL-Server) 5. [Spark JDBC to Redshift](#Spark-JDBC-to-Redshift)5. [Clean Up](#Clean-Up) OverviewSpark SQL includes a data source that can read data from many databases using JDBC. The results are returned as a DataFrame that can easily be processed in Spark SQL or joined with other data sources. **Prerequisites** In the SPARK_HOME/conf/spark-defaults.conf file, add the path of the JDBC drivers for PostgreSQL, MySQL, Oracle, and SQL Server to the following two Spark properties: * spark.driver.extraClassPath* spark.executor.extraClassPath For more details, read [Spark JDBC to Databases](https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#jdbc-to-other-databases) Set UP
###Code
import os
# Iguazio env
V3IO_USER = os.getenv('V3IO_USERNAME')
V3IO_HOME = os.getenv('V3IO_HOME')
V3IO_HOME_URL = os.getenv('V3IO_HOME_URL')
# Database properties
%env DB_HOST = "" # Database host's fully qualified name
%env DB_PORT = "" # Port num of the database
%env DB_DRIVER = "" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]
%env DB_Name = "" # Database|Schema Name
%env DB_TABLE = "" # Table Name
%env DB_USER = "" # Database User Name
%env DB_PASSWORD = "" # Database User's Password
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
###Output
env: DB_HOST="" # Database host's fully qualified name
env: DB_PORT="" # Port num of the database
env: DB_DRIVER="" # Database Driver [postgresql|mysql|oracle:thin|sqlserver]
env: DB_Name="" # Database|Schema Name
env: DB_TABLE="" # Table Name
env: DB_USER="" # Database User Name
env: DB_PASSWORD="" # Database User's Password
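###Markdown
Once the variables above hold real values (the placeholder lines currently keep the inline comments as part of the value), they can be read back with `os.getenv` and assembled into a JDBC URL that the connection cells below could use instead of hard-coded strings. A minimal sketch — the variable names match the `%env` lines above, the defaults are illustrative only:
###Code
# Assemble a JDBC URL and connection properties from the environment
# (this URL pattern fits postgresql/mysql; Oracle and SQL Server use their own formats)
db_driver = os.getenv("DB_DRIVER", "mysql")
db_host = os.getenv("DB_HOST", "localhost")
db_port = os.getenv("DB_PORT", "3306")
db_name = os.getenv("DB_Name", "db1")
jdbc_url = "jdbc:{}://{}:{}/{}".format(db_driver, db_host, db_port, db_name)
connection_properties = {
    "user": os.getenv("DB_USER", ""),
    "password": os.getenv("DB_PASSWORD", ""),
}
print(jdbc_url)
###Output
_____no_output_____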
###Markdown
Initiate Spark Session
###Code
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
# METHOD I:
# Update Spark configurations of the following two extraClassPath with the JDBC driver location
# prior to initiating a Spark session:
# spark.driver.extraClassPath
# spark.executor.extraClassPath
#
# NOTE:
# If you don't connect to MySQL, replace the MySQL connector with the other database's JDBC connector
# in the following two extraClassPath settings.
#
# Initiate a Spark Session
spark = SparkSession.builder.\
appName("Spark JDBC to Databases - ipynb").\
config("spark.driver.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar").\
config("spark.executor.extraClassPath", "/spark/3rd_party/mysql-connector-java-8.0.13.jar").getOrCreate()
"""
# METHOD II:
# Use "PYSPARK_SUBMIT_ARGS" with "--packages" option. (The preferred way)
#
# Usage:
# os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages dialect:dialect-specific-jdbc-connector:version# pyspark-shell"
#
# NOTE:
# If you don't connect to MySQL, replace the MySQL connector with the other database's JDBC connector
# in the following line.
"""
# Set PYSPARK_SUBMIT_ARGS
#os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
"""
# NOTE:
# If you use PYSPARK_SUBMIT_ARGS, use the following way to initiate the Spark session WITHOUT update:
# spark.driver.extraClassPath
# spark.executor.extraClassPath
"""
# Initiate a Spark Session
#spark = SparkSession.builder.appName("Spark JDBC to Databases - ipynb").getOrCreate()
import pprint
# List Spark configurations
# Verify if JDBC drivers' path was properly added to both of Spark driver and executor extra LibPath.
conf = spark.sparkContext._conf.getAll()
pprint.pprint(conf)
###Output
[('spark.sql.catalogImplementation', 'in-memory'),
('spark.driver.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.app.id', 'app-20190704070308-0001'),
('spark.executor.memory', '2G'),
('spark.executor.id', 'driver'),
('spark.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.cores.max', '4'),
('spark.executorEnv.V3IO_ACCESS_KEY', 'bb79fffa-7582-4fd2-9347-a350335801fc'),
('spark.driver.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.executor.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.driver.port', '33751'),
('spark.driver.host', '10.233.92.91'),
('spark.executor.extraLibraryPath', '/hadoop/etc/hadoop'),
('spark.submit.pyFiles',
'/igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.app.name', 'Spark JDBC to Databases - ipynb'),
('spark.repl.local.jars',
'file:///spark/v3io-libs/v3io-hcfs_2.11.jar,file:///spark/v3io-libs/v3io-spark2-object-dataframe_2.11.jar,file:///spark/v3io-libs/v3io-spark2-streaming_2.11.jar,file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.rdd.compress', 'True'),
('spark.serializer.objectStreamReset', '100'),
('spark.files',
'file:///igz/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar'),
('spark.executor.cores', '1'),
('spark.executor.extraClassPath',
'/spark/3rd_party/mysql-connector-java-8.0.13.jar'),
('spark.submit.deployMode', 'client'),
('spark.driver.extraJavaOptions', '"-Dsun.zip.disableMemoryMapping=true"'),
('spark.ui.showConsoleProgress', 'true'),
('spark.executorEnv.V3IO_USERNAME', 'iguazio'),
('spark.master', 'spark://spark-jddcm4iwas-qxw13-master:7077')]
###Markdown
Spark JDBC to Databases Spark JDBC to MySQL Connecting to a public MySQL instance
###Code
#Loading data from a JDBC source
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://mysql-rfam-public.ebi.ac.uk:4497/Rfam") \
.option("dbtable", "Rfam.family") \
.option("user", "rfamro") \
.option("password", "") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
|rfam_acc| rfam_id|auto_wiki| description| author| seed_source|gathering_cutoff|trusted_cutoff|noise_cutoff| comment| previous_id| cmbuild| cmcalibrate| cmsearch|num_seed|num_full|num_genome_seq|num_refseq| type| structure_source|number_of_species|number_3d_structures|num_pseudonokts|tax_seed|ecmli_lambda| ecmli_mu|ecmli_cal_db|ecmli_cal_hits|maxl|clen|match_pair_node|hmm_tau|hmm_lambda| created| updated|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
| RF00001| 5S_rRNA| 1302| 5S ribosomal RNA|Griffiths-Jones S...|Szymanski et al, ...| 38.0| 38.0| 37.9|5S ribosomal RNA ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 712| 139932| 0| 0| Gene; rRNA;|Published; PMID:1...| 8022| 0| null| | 0.59496| -5.32219| 1600000| 213632| 305| 119| true|-3.7812| 0.71822|2013-10-03 20:41:44|2019-01-04 15:01:52|
| RF00002| 5_8S_rRNA| 1303| 5.8S ribosomal RNA|Griffiths-Jones S...|Wuyts et al, Euro...| 42.0| 42.0| 41.9|5.8S ribosomal RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 61| 4716| 0| 0| Gene; rRNA;|Published; PMID:1...| 587| 0| null| | 0.65546| -9.33442| 1600000| 410495| 277| 154| true|-3.5135| 0.71791|2013-10-03 20:47:00|2019-01-04 15:01:52|
| RF00003| U1| 1304| U1 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U1 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 100| 15436| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 837| 0| null| | 0.6869| -8.54663| 1600000| 421575| 267| 166| true|-3.7781| 0.71616|2013-10-03 20:57:11|2019-01-04 15:01:52|
| RF00004| U2| 1305| U2 spliceosomal RNA|Griffiths-Jones S...|The uRNA database...| 46.0| 46.0| 45.9|U2 is a small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 208| 16562| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1102| 0| null| | 0.55222| -9.81359| 1600000| 403693| 301| 193| true|-3.5144| 0.71292|2013-10-03 20:58:30|2019-01-04 15:01:52|
| RF00005| tRNA| 1306| tRNA|Eddy SR, Griffith...| Eddy SR| 29.0| 29.0| 28.9|Transfer RNA (tRN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 954| 1429129| 0| 0| Gene; tRNA;|Published; PMID:8...| 9934| 0| null| | 0.64375| -4.21424| 1600000| 283681| 253| 71| true|-2.6167| 0.73401|2013-10-03 21:00:26|2019-01-04 15:01:52|
| RF00006| Vault| 1307| Vault RNA|Bateman A, Gardne...|Published; PMID:1...| 34.0| 34.1| 33.9|This family of RN...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 73| 564| 0| 0| Gene;|Published; PMID:1...| 94| 0| null| | 0.63669| -4.8243| 1600000| 279629| 406| 101| true|-3.5531| 0.71855|2013-10-03 22:04:04|2019-01-04 15:01:52|
| RF00007| U12| 1308|U12 minor spliceo...|Griffiths-Jones S...|Shukla GC and Pad...| 53.0| 53.0| 52.9|The U12 small nuc...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 62| 531| 0| 0|Gene; snRNA; spli...|Predicted; Griffi...| 336| 0| null| | 0.55844| -9.95163| 1600000| 493455| 520| 155| true|-3.1678| 0.71782|2013-10-03 22:04:07|2019-01-04 15:01:52|
| RF00008| Hammerhead_3| 1309|Hammerhead ribozy...| Bateman A| Bateman A| 29.0| 29.0| 28.9|The hammerhead ri...| Hammerhead|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 82| 3098| 0| 0| Gene; ribozyme;|Published; PMID:7...| 176| 0| null| | 0.63206| -3.83325| 1600000| 199872| 394| 58| true| -4.375| 0.71923|2013-10-03 22:04:11|2019-01-04 15:01:52|
| RF00009| RNaseP_nuc| 1310| Nuclear RNase P|Griffiths-Jones S...|Brown JW, The Rib...| 28.0| 28.0| 27.9|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 116| 1237| 0| 0| Gene; ribozyme;|Published; PMID:9...| 763| 0| null| | 0.7641| -8.04053| 1600000| 274636|1082| 303| true|-4.3673| 0.70576|2013-10-03 22:04:14|2019-01-04 15:01:52|
| RF00010|RNaseP_bact_a| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 100.0| 100.5| 99.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 458| 6023| 0| 0| Gene; ribozyme;|Published; PMID:9...| 6324| 0| null| | 0.76804| -8.48988| 1600000| 366265| 873| 367| true|-4.3726| 0.70355|2013-10-03 22:04:21|2019-01-04 15:01:52|
| RF00011|RNaseP_bact_b| 2441|Bacterial RNase P...|Griffiths-Jones S...|Brown JW, The Rib...| 97.0| 97.1| 96.6|Ribonuclease P (R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 114| 676| 0| 0| Gene; ribozyme;|Published; PMID:9...| 767| 0| null| | 0.69906| -8.4903| 1600000| 418092| 675| 366| true|-4.0357| 0.70361|2013-10-03 22:04:51|2019-01-04 15:01:52|
| RF00012| U3| 1312|Small nucleolar R...| Gardner PP, Marz M|Published; PMID:1...| 34.0| 34.0| 33.9|Small nucleolar R...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 87| 3924| 0| 0|Gene; snRNA; snoR...|Published; PMID:1...| 416| 0| null| | 0.59795| -9.77278| 1600000| 400072| 326| 218| true|-3.8301| 0.71077|2013-10-03 22:04:54|2019-01-04 15:01:52|
| RF00013| 6S| 2461| 6S / SsrS RNA|Bateman A, Barric...| Barrick JE| 48.0| 48.0| 47.9|E. coli 6S RNA wa...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 149| 3576| 0| 0| Gene;|Published; PMID:1...| 3309| 0| null| | 0.56243|-10.04259| 1600000| 331091| 277| 188| true|-3.5895| 0.71351|2013-10-03 22:05:06|2019-01-04 15:01:52|
| RF00014| DsrA| 1237| DsrA RNA| Bateman A| Bateman A| 60.0| 61.5| 57.6|DsrA RNA regulate...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 5| 35| 0| 0| Gene; sRNA;|Published; PMID:9...| 39| 0| null| | 0.53383| -8.38474| 1600000| 350673| 177| 85| true|-3.3562| 0.71888|2013-02-01 11:56:19|2019-01-04 15:01:52|
| RF00015| U4| 1314| U4 spliceosomal RNA| Griffiths-Jones SR|Zwieb C, The uRNA...| 46.0| 46.0| 45.9|U4 small nuclear ...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 170| 7522| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1025| 0| null| | 0.58145| -8.85604| 1600000| 407516| 575| 140| true|-3.5007| 0.71795|2013-10-03 22:05:22|2019-01-04 15:01:52|
| RF00016| SNORD14| 1242|Small nucleolar R...|Griffiths-Jones S...| Griffiths-Jones SR| 64.0| 64.1| 63.9|U14 small nucleol...| U14|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 18| 1182| 0| 0|Gene; snRNA; snoR...| Predicted; PFOLD| 221| 0| null| | 0.63073| -3.65386| 1600000| 232910| 229| 116| true| -3.128| 0.71819|2013-02-01 11:56:23|2019-01-04 15:01:52|
| RF00017| Metazoa_SRP| 1315|Metazoan signal r...| Gardner PP|Published; PMID:1...| 70.0| 70.0| 69.9|The signal recogn...|SRP_euk_arch; 7SL...|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 91| 42386| 0| 0| Gene;|Published; PMID:1...| 402| 0| null| | 0.64536| -9.85267| 1600000| 488632| 514| 301| true|-4.0177| 0.70604|2013-10-03 22:07:53|2019-01-04 15:01:52|
| RF00018| CsrB| 2460|CsrB/RsmB RNA family|Bateman A, Gardne...| Bateman A| 71.0| 71.4| 70.9|The CsrB RNA bind...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 38| 254| 0| 0| Gene; sRNA;|Predicted; PFOLD;...| 196| 0| null| | 0.69326| -9.81172| 1600000| 546392| 555| 356| true|-4.0652| 0.70388|2013-10-03 23:07:27|2019-01-04 15:01:52|
| RF00019| Y_RNA| 1317| Y RNA|Griffiths-Jones S...|Griffiths-Jones S...| 38.0| 38.0| 37.9|Y RNAs are compon...| Y1; Y2; Y3; Y5|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 104| 8521| 0| 0| Gene;|Published; PMID:1...| 123| 0| null| | 0.59183| -5.14312| 1600000| 189478| 249| 98| true|-2.8418| 0.7187|2013-10-03 23:07:38|2019-01-04 15:01:52|
| RF00020| U5| 1318| U5 spliceosomal RNA|Griffiths-Jones S...|Zwieb C, The uRNA...| 40.0| 40.0| 39.9|U5 RNA is a compo...| null|cmbuild -F CM SEED|cmcalibrate --mpi CM|cmsearch --cpu 4 ...| 180| 7524| 0| 0|Gene; snRNA; spli...|Published; PMID:2...| 1001| 0| null| | 0.50732| -5.54774| 1600000| 339349| 331| 116| true|-4.1327| 0.7182|2013-10-03 23:08:43|2019-01-04 15:01:52|
+--------+-------------+---------+--------------------+--------------------+--------------------+----------------+--------------+------------+--------------------+--------------------+------------------+--------------------+--------------------+--------+--------+--------------+----------+--------------------+--------------------+-----------------+--------------------+---------------+--------+------------+---------+------------+--------------+----+----+---------------+-------+----------+-------------------+-------------------+
only showing top 20 rows
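###Markdown
Once loaded, the JDBC-backed DataFrame behaves like any other — a quick sketch of filtering and aggregating the Rfam families read above (the column names come from the schema shown in the output):
###Code
from pyspark.sql import functions as F

# Families with large alignments
dfMySQL.filter(F.col("num_full") > 100000) \
    .select("rfam_acc", "rfam_id", "num_full") \
    .show(5)

# A simple aggregate over the type column
dfMySQL.groupBy("type").count().show(5)
###Output
_____no_output_____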
###Markdown
Connecting to a testing/temporary MySQL instance**NOTE:** This won't work if the testing MySQL instance has been shut down.
###Code
dfMySQL = spark.read \
.format("jdbc") \
.option("url", "jdbc:mysql://172.31.33.215:3306/db1") \
.option("dbtable", "db1.fruit") \
.option("user", "root") \
.option("password", "my-secret-pw") \
.option("driver", "com.mysql.jdbc.Driver") \
.load()
dfMySQL.show()
###Output
_____no_output_____
###Markdown
Spark JDBC to PostgreSQL
###Code
#Loading data from a JDBC source
dfPS = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.load()
dfPS2 = spark.read \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename",
properties={"user": "username", "password": "password"})
# Specifying dataframe column data types on read
dfPS3 = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.option("customSchema", "id DECIMAL(38, 0), name STRING") \
.load()
# Saving data to a JDBC source
dfPS.write \
.format("jdbc") \
.option("url", "jdbc:postgresql:dbserver") \
.option("dbtable", "schema.tablename") \
.option("user", "username") \
.option("password", "password") \
.save()
dfPS2.write \
    .jdbc("jdbc:postgresql:dbserver", "schema.tablename",
          properties={"user": "username", "password": "password"})
# Specifying create table column data types on write
dfPS.write \
.option("createTableColumnTypes", "name CHAR(64), comments VARCHAR(1024)") \
.jdbc("jdbc:postgresql:dbserver", "schema.tablename", properties={"user": "username", "password": "password"})
###Output
_____no_output_____
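###Markdown
When a write is re-run, the save mode controls what happens if the table already exists. A sketch of a common choice, reusing the placeholder connection details above — the `truncate` option keeps the existing table definition instead of dropping it on overwrite:
###Code
# Overwrite the table contents but keep its schema and grants by truncating instead of dropping
dfPS.write \
    .format("jdbc") \
    .option("url", "jdbc:postgresql:dbserver") \
    .option("dbtable", "schema.tablename") \
    .option("user", "username") \
    .option("password", "password") \
    .option("truncate", "true") \
    .mode("overwrite") \
    .save()
###Output
_____no_output_____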
###Markdown
Spark JDBC to Oracle
###Code
# Read a table from Oracle (table: hr.emp)
dfORA = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", "hr.emp") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA.printSchema()
dfORA.show()
# Read a query from Oracle
query = "(select empno,ename,dname from emp, dept where emp.deptno = dept.deptno) emp"
dfORA1 = spark.read \
.format("jdbc") \
.option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
.option("dbtable", query) \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "oracle.jdbc.driver.OracleDriver") \
.load()
dfORA1.printSchema()
dfORA1.show()
###Output
_____no_output_____
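###Markdown
The Oracle JDBC driver defaults to a small row fetch size, which makes large reads chatty. The `fetchsize` option raises the number of rows pulled per round trip — a sketch using the same placeholder connection details as above:
###Code
# Pull more rows per network round trip when scanning a large table
dfORAbig = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:oracle:thin:username/password@//hostname:portnumber/SID") \
    .option("dbtable", "hr.emp") \
    .option("user", "db_user_name") \
    .option("password", "password") \
    .option("driver", "oracle.jdbc.driver.OracleDriver") \
    .option("fetchsize", "10000") \
    .load()
###Output
_____no_output_____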
###Markdown
Spark JDBC to MS SQL Server
###Code
# Read a table from MS SQL Server
dfMS = spark.read \
.format("jdbc") \
    .option("url", "jdbc:sqlserver://hostname:portnumber;databaseName=DB") \
.option("dbtable", "db_table_name") \
.option("user", "db_user_name") \
.option("password", "password") \
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver" ) \
.load()
dfMS.printSchema()
dfMS.show()
###Output
_____no_output_____
###Markdown
Spark JDBC to Redshift
###Code
# Read data from a table
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Read data from a query
dfRS = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("query", "select x, count(*) my_table group by x") \
.option("tempdir", "s3n://path/for/temp/data") \
.load()
# Write back to a table
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.mode("error") \
.save()
# Using IAM Role based authentication
dfRS.write \
.format("com.databricks.spark.redshift") \
.option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
.option("dbtable", "my_table_copy") \
.option("tempdir", "s3n://path/for/temp/data") \
.option("aws_iam_role", "arn:aws:iam::123456789000:role/redshift_iam_role") \
.mode("error") \
.save()
###Output
_____no_output_____
###Markdown
Clean UpPrior to exiting, run housekeeping to release the disk space, computation, and memory resources used by this session. Remove Data
###Code
# Uncomment the following line (and edit the path) to delete the example data
# !rm -rf $HOME/PATH/*
###Output
_____no_output_____
###Markdown
Stop Spark Session
###Code
spark.stop()
###Output
_____no_output_____ |
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb | ###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
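###Markdown
If `is_unique` had come back `False`, a common fix is to inspect the offending rows and keep one per date — a short sketch, not needed for this dataset and shown only for reference:
###Code
# Rows whose index value appears more than once (empty here)
dupes = apple[apple.index.duplicated(keep=False)]
print(len(dupes))

# Keep the first row for each duplicated date
apple_dedup = apple[~apple.index.duplicated(keep='first')]
###Output
_____no_output_____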
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
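###Markdown
With the dates as an index, a couple of common follow-ups are daily returns and a rolling average — a short sketch building on the `apple` DataFrame loaded above:
###Code
# Work on a chronologically sorted copy
apple_sorted = apple.sort_index()

# Daily percentage change of the adjusted close
daily_returns = apple_sorted['Adj Close'].pct_change()
print(daily_returns.describe())

# 30-day rolling mean, plotted at the same figure size as above
apple_sorted['Adj Close'].rolling(window=30).mean().plot(figsize=(13.5, 9), title="Apple 30-day rolling mean")
###Output
_____no_output_____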
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction: We use Apple's stock price for time-series analysis. Step 1. Import the necessary Python libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the data from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv). Step 3. Put the data from this file into the variable apple.
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check the data type of each column of apple
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column of apple into a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the Date column as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Check whether the values in the index are unique
###Code
# Call is_unique to check for duplicates
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Sort the data by the index (date) in ascending order
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the data for the last business day of each month
###Code
# Translator's note: use resample to process date-indexed data,
# e.g. turning daily sales into monthly or yearly sales (important)
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. How many days does the apple data span in total?
###Code
print("max day:",apple.index.max())
print("min day:",apple.index.min())
print("max day type:", type(apple.index.max()))
print("min day type:", type(apple.index.min()))
gap = apple.index.max() - apple.index.min()
print("gap:",gap)
print("gap type:",type(gap))
(apple.index.max() - apple.index.min()).days
###Output
max day: 2014-07-08 00:00:00
min day: 1980-12-12 00:00:00
max day type: <class 'pandas._libs.tslibs.timestamps.Timestamp'>
min day type: <class 'pandas._libs.tslibs.timestamps.Timestamp'>
gap: 12261 days 00:00:00
gap type: <class 'pandas._libs.tslibs.timedeltas.Timedelta'>
###Markdown
Step 11. How many months does the apple data span?
###Code
# Again, first find the last business day of each month
apple_months = apple.resample('BM').mean()
# Then count how many of those business days there are; that is the number of months
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the adjusted close price ('Adj Close') and set the figure size to 13.5 x 9 inches
###Code
# Use the values of the 'Adj Close' column directly
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# Change the figure size
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
BONUS: Create your own question and answer it.
###Code
# Bonus question: what does resample do? The original call pd.describe('resample') is not valid pandas;
# a working way to answer the question is to inspect the method's documentation instead:
help(pd.DataFrame.resample)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple Stock Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
###Markdown
Apple StockCheck out [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises Introduction:We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Is there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Ops...it seems the index is from the most recent date. Make the first entry the oldest date.
###Code
apple.sort_index(ascending = True).head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
###Markdown
Step 10. What is the difference in days between the first day and the oldest
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months in the data we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____ |
6_new_models/bayesian_transformers.ipynb | ###Markdown
And now, for the pretrained, non-fine-tuned BERT, the results are:
###Code
model, optimizer = load_model(args)
evaluate(model, validation_dataloader)
###Output
Acc 0.478
###Markdown
We clearly see that training the full model using variational Adam doesn't lead to good results. We conjecture that the reason for this is the additional stochasticity introduced by variational Adam via the weight perturbations, which makes it difficult for the model to learn. Vadam: Approximating the posterior of the final layer after converging to a local optimum on the original dataset Given the impossibility of fine-tuning a full transformer using Vadam, we first do the fine-tuning using AdamW and, once we have converged to a local optimum, freeze all the layers except for the last one and switch to Vadam to approximate the posterior over the last-layer parameters. We first apply this procedure to the already trained RoBERTa-large model provided by the authors of [Aligning AI with shared human values](https://arxiv.org/abs/2008.02275), so only the second step is needed here: since the model has already been trained, we simply freeze all of its layers except for the last one and run Vadam.
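For reference, here is our reading of the per-parameter update performed by the Vadam optimizer implemented later in this notebook (notation is ours; $N$ is the training-set size, $\lambda$ the prior precision, $\alpha$ the learning rate, and hats denote Adam-style bias correction):
$$\theta \sim \mathcal{N}\!\left(\mu,\; \operatorname{diag}\!\big(1/(N s + \lambda)\big)\right), \qquad g = \nabla_\theta \hat{\mathcal{L}}(\theta)$$
$$m \leftarrow \beta_1 m + (1-\beta_1)\left(g + \tfrac{\lambda}{N}\mu\right), \qquad s \leftarrow \beta_2 s + (1-\beta_2)\, g^2, \qquad \mu \leftarrow \mu - \alpha\, \frac{\hat{m}}{\sqrt{\hat{s}} + \lambda/N}$$
The sampling of $\theta$ around the mean $\mu$ at every step is the extra stochasticity referred to above, and $\mathcal{N}(\mu, \operatorname{diag}(1/(Ns+\lambda)))$ is the Gaussian posterior approximation whose precisions `get_weight_precs()` returns.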
###Code
args = MyArgs(model='roberta-large', ngpus=1)
!pip install gdown
import gdown
!gdown https://drive.google.com/uc?id=1MHvSFbHjvzebib90wW378VtDAtn1WVxc
model, optimizer = load_model(args, load_path='util_roberta-large.pt')
data_dir = '1_original_study_datasets'
train_name = 'util_train'
test_name = "util_test"
hard_test_name = "util_test_hard"
train_data = load_process_data(args, data_dir, "util", train_name)
test_hard_data = load_process_data(args, data_dir, "util", hard_test_name)
test_data = load_process_data(args, data_dir, "util", test_name)
train_dataloader = DataLoader(train_data, batch_size=args.batch_size // 2, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=args.batch_size // 2, shuffle=False)
hard_test_dataloader = DataLoader(test_hard_data, batch_size=args.batch_size // 2, shuffle=False)
evaluate(model, test_dataloader)
for name, param in model.named_parameters():
if name != 'module.classifier.out_proj.weight' and name != 'module.classifier.out_proj.bias':
param.requires_grad = False
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
trainable_parameters = [model.module.classifier.out_proj.weight, model.module.classifier.out_proj.bias]
args.learning_rate = 1e-5
args.batch_size = 8
args.nepochs = 10
betas = (0.9,0.995)
train_mc_samples = 5
eval_mc_samples = 20
optimizer_vadam_last_layer = Vadam(trainable_parameters,
lr = args.learning_rate,
betas = betas,
prior_prec = 1.0,
prec_init = 1.0,
num_samples = train_mc_samples,
train_set_size = len(train_data))
current_precisions = None
precisions_difference = []
for epoch in range(args.nepochs):
print()
train_variational(model, optimizer_vadam_last_layer, train_dataloader, epoch, verbose=True)
if epoch > 0:
precisions_difference.append(torch.norm(optimizer_vadam_last_layer.get_weight_precs()[0][0][0] - current_precisions[0][0][0]))
print(precisions_difference)
epoch += 1
current_precisions = copy.deepcopy(optimizer_vadam_last_layer.get_weight_precs())
evaluate(model, test_dataloader)
precisions_difference
precs_roberta_original = optimizer_vadam_last_layer.get_weight_precs()
precs_roberta_original
std_weights = torch.sqrt(1./precs_roberta_original[0][0][0])
std_bias = torch.sqrt(1./precs_roberta_original[0][1][0])
#Save models
#f = open("variational_training_original_roberta.pkl","wb")
#pickle.dump(model,f)
#f.close()
#Save precisions
#f = open("precisions_weights_biases_original_roberta.pkl","wb")
#pickle.dump(precs_roberta_original,f)
#f.close()
###Output
_____no_output_____
###Markdown
Once the model has been trained, we make predictions by sampling weights from the approximate posterior, computing the predictions for each of these samples, and averaging the results.
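Concretely, this is plain Monte Carlo averaging over the diagonal-Gaussian posterior of the last layer (a sketch of what the evaluation code below does, with $S$ Monte Carlo samples and $\sigma^2$ the reciprocal of the precisions returned by `get_weight_precs()`):
$$p(y \mid x, \mathcal{D}) \;\approx\; \frac{1}{S}\sum_{i=1}^{S} p\big(y \mid x, \theta^{(i)}\big), \qquad \theta^{(i)} \sim \mathcal{N}\big(\mu, \operatorname{diag}(\sigma^2)\big).$$
The spread of the per-sample predictions also gives a per-example confidence and variance, which we use for the calibration analysis further down.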
###Code
results_roberta = variational_inference_uncertainties_roberta(model, std_weights, std_bias, test_dataloader, mc_samples = 20)
results_roberta_hard = variational_inference_uncertainties_roberta(model, std_weights, std_bias, hard_test_dataloader, mc_samples = 20)
conf, acc, bins, num_in_bins = split_in_bins(results_roberta['predictions'], results_roberta['confidence'])
conf_hard, acc_hard, bins_hard, num_in_bins_hard = split_in_bins(results_roberta_hard['predictions'], results_roberta_hard['confidence'])
ece_easy = get_ECE(conf, acc, num_in_bins)
print(f"ECE easy test dataset: {ece_easy}")
ece_hard = get_ECE(conf_hard, acc_hard, num_in_bins_hard)
print(f"ECE hard test dataset: {ece_hard}")
plot_reliability_diagram_original(acc, bins, acc_hard, bins_hard, "vadam_roberta_original")
accuracy = np.mean(results_roberta['predictions'])
accuracy
accuracy_hard = np.mean(results_roberta_hard['predictions'])
accuracy_hard
###Output
_____no_output_____
###Markdown
Vadam: Approximating the posterior of the final layer after converging to a local optimum on the reformulated dataset We now do the same as above, but for a RoBERTa model that has been trained on the reformulated dataset.
###Code
args = MyArgs()
args = MyArgs(model='roberta-large', ngpus=1)
!pip install gdown
import gdown
!gdown https://drive.google.com/uc?id=1-Y19ljB76eESM6m8EVcZVeQjRfcNa7In
model, optimizer = load_model(args, load_path='final_rerelease_roberta-large_1e-05_16_2.pkl')
data_dir = '4_reformulated_datasets'
train_name = 'util_train_no_test_overlap'
test_name = "util_test_easy_matched"
hard_test_name = "util_test_hard_matched"
unmatched_test_name = "test_combined_unmatched"
train_data = load_process_data(args, data_dir, "util", train_name)
test_hard_data = load_process_data(args, data_dir, "util", hard_test_name)
test_data = load_process_data(args, data_dir, "util", test_name)
test_unmatched_data = load_process_data(args, data_dir, "util", unmatched_test_name)
train_dataloader = DataLoader(train_data, batch_size=args.batch_size // 2, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=args.batch_size // 2, shuffle=False)
hard_test_dataloader = DataLoader(test_hard_data, batch_size=args.batch_size // 2, shuffle=False)
unmatched_test_dataloader = DataLoader(test_unmatched_data, batch_size=args.batch_size // 2, shuffle=False)
evaluate(model, test_dataloader)
evaluate(model, hard_test_dataloader)
for name, param in model.named_parameters():
if name != 'module.classifier.out_proj.weight' and name != 'module.classifier.out_proj.bias':
param.requires_grad = False
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
trainable_parameters = [model.module.classifier.out_proj.weight, model.module.classifier.out_proj.bias]
args.learning_rate = 1e-5
args.batch_size = 8
args.nepochs = 10
betas = (0.9,0.995)
train_mc_samples = 5
eval_mc_samples = 20
optimizer_vadam_last_layer = Vadam(trainable_parameters,
lr = args.learning_rate,
betas = betas,
prior_prec = 1.0,
prec_init = 1.0,
num_samples = train_mc_samples,
train_set_size = len(train_data))
current_precisions = None
precisions_difference = []
for epoch in range(args.nepochs):
print()
train_variational(model, optimizer_vadam_last_layer, train_dataloader, epoch, verbose=True)
if epoch > 0:
precisions_difference.append(torch.norm(optimizer_vadam_last_layer.get_weight_precs()[0][0][0] - current_precisions[0][0][0]))
print(precisions_difference)
epoch += 1
current_precisions = copy.deepcopy(optimizer_vadam_last_layer.get_weight_precs())
evaluate(model, test_dataloader)
evaluate(model, hard_test_dataloader)
precisions_difference
precs_roberta_large = optimizer_vadam_last_layer.get_weight_precs()
#Save models
#f = open("variational_training_rerelease_roberta_large_model.pkl","wb")
#pickle.dump(model,f)
#f.close()
#Save precisions
#f = open("precisions_weights_biases_rerelease_roberta_large.pkl","wb")
#pickle.dump(precs_roberta_large,f)
#f.close()
std_weights = torch.sqrt(1./precs_roberta_large[0][0][0])
std_bias = torch.sqrt(1./precs_roberta_large[0][1][0])
results_roberta_rerelease = variational_inference_uncertainties_roberta(model, std_weights, std_bias, test_dataloader, mc_samples = 20)
results_roberta_hard_rerelease = variational_inference_uncertainties_roberta(model, std_weights, std_bias, hard_test_dataloader, mc_samples = 20)
results_roberta_unmatched = variational_inference_uncertainties_roberta(model, std_weights, std_bias, unmatched_test_dataloader, mc_samples = 20)
conf, acc, bins, num_in_bins = split_in_bins(results_roberta_rerelease['predictions'], results_roberta_rerelease['confidence'])
conf_hard, acc_hard, bins_hard, num_in_bins_hard = split_in_bins(results_roberta_hard_rerelease['predictions'], results_roberta_hard_rerelease['confidence'])
conf_unm, acc_unm, bins_unm, num_in_bins_unm = split_in_bins(results_roberta_unmatched['predictions'], results_roberta_unmatched['confidence'])
ece_easy = get_ECE(conf, acc, num_in_bins)
print(f"ECE easy test dataset: {ece_easy}")
ece_hard = get_ECE(conf_hard, acc_hard, num_in_bins_hard)
print(f"ECE hard test dataset: {ece_hard}")
ece_unmatched = get_ECE(conf_unm, acc_unm, num_in_bins_unm)
plot_reliability_diagram_rerelease(acc, bins, acc_hard, bins_hard, acc_unm, bins_unm, "variational_rerlease_roberta")
print(f"ECE unmatched test dataset: {ece_unmatched}")
np.mean(results_roberta_rerelease['predictions'])
np.mean(results_roberta_hard_rerelease['predictions'])
np.mean(results_roberta_unmatched['predictions'])
#Save results
#f = open("results_variational_roberta_large_test_rerelease_final.pkl","wb")
#pickle.dump(results_roberta_large_test,f)
#f.close()
#f = open("results_variational_roberta_large_hard_test_rerelease_final.pkl","wb")
#pickle.dump(results_roberta_large_hard_test,f)
#f.close()
###Output
_____no_output_____
###Markdown
Doing Bayesian Deep Learning on Transformers In this notebook we describe the process that we took to train our models using Vadam [Emtiyaz Khan et al., 2018](https://arxiv.org/pdf/1806.04854.pdf) and SGLD [Welling, Teh, 2011](https://www.ics.uci.edu/~welling/publications/papers/stoclangevin_v6.pdf) Setup For Colab only Set the path to the root folder after the change-directory command
###Code
# # mount google drive
#from google.colab import drive
#drive.mount('/content/drive')
# # change to directory containing relevant files
#%cd INSERT DIRECTORY
#@title Run to check we're in the correct directory
#try:
# f = open("check_directory.txt")
# print('Success :)')
#except IOError:
# print("Wrong directory, please try again")
!pip install transformers
!pip install barbar
###Output
_____no_output_____
###Markdown
Importing Generic Libraries
###Code
import numpy as np
import pandas as pd
import torch
from torch.utils.data import TensorDataset, DataLoader
from barbar import Bar
from torch.optim.optimizer import Optimizer
from torch.nn.utils import parameters_to_vector, vector_to_parameters
import math
import pickle
import copy
import matplotlib
import matplotlib.pyplot as plt
from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig, AdamW
import os
###Output
_____no_output_____
###Markdown
Functions needed to load and preprocess data. Source: [https://github.com/hendrycks/ethics](https://github.com/hendrycks/ethics)
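The helpers below keep the two sentences of each comparison pair together: `load_process_data` stacks them along a second axis of size 2, and `flatten`/`unflatten` convert between the paired layout and the flat layout expected by the transformer. A minimal sketch of what `flatten` does to the shapes (illustrative only, shapes inferred from the code):
```python
import torch

x = torch.zeros(4, 2, 64)             # 4 pairs, 2 sentences per pair, 64 tokens each
flat = torch.cat([x[:, 0], x[:, 1]])  # same operation as flatten(x) below
print(flat.shape)                     # torch.Size([8, 64])
```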
###Code
def load_model(args, load_path=None, cache_dir=None):
if cache_dir is not None:
config = AutoConfig.from_pretrained(args.model, num_labels=1, cache_dir=cache_dir)
else:
config = AutoConfig.from_pretrained(args.model, num_labels=1)
model = AutoModelForSequenceClassification.from_pretrained(args.model, config=config)
if load_path is not None:
model.load_state_dict(torch.load(load_path), strict=False)
model.cuda()
model = torch.nn.DataParallel(model, device_ids=[i for i in range(args.ngpus)])
print('\nPretrained model "{}" loaded'.format(args.model))
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters()
if not any(nd in n for nd in no_decay)],
'weight_decay': args.weight_decay},
{'params': [p for n, p in model.named_parameters()
if any(nd in n for nd in no_decay)],
'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=1e-8)
return model, optimizer
def load_process_data(args, data_dir, dataset, name="train"):
load_fn = load_util_sentences
sentences, labels = load_fn(data_dir, name=name)
sentences = ["[CLS] " + s for s in sentences]
tokenizer = get_tokenizer(args.model)
ids, amasks = get_ids_mask(sentences, tokenizer, args.max_length)
within_bounds = [ids[i, -1] == 0 for i in range(len(ids))]
if np.mean(within_bounds) < 1:
print("{} fraction of examples within context window ({} tokens): {:.3f}".format(name, args.max_length, np.mean(within_bounds)))
inputs, labels, masks = torch.tensor(ids), torch.tensor(labels), torch.tensor(amasks)
even_mask = [i for i in range(inputs.shape[0]) if i % 2 == 0]
odd_mask = [i for i in range(inputs.shape[0]) if i % 2 == 1]
even_inputs, odd_inputs = inputs[even_mask], inputs[odd_mask]
even_labels, odd_labels = labels[even_mask], labels[odd_mask]
even_masks, odd_masks = masks[even_mask], masks[odd_mask]
inputs = torch.stack([even_inputs, odd_inputs], axis=1)
labels = torch.stack([even_labels, odd_labels], axis=1)
masks = torch.stack([even_masks, odd_masks], axis=1)
data = TensorDataset(inputs, masks, labels)
return data
def load_util_sentences(data_dir, name="train"):
path = os.path.join(data_dir, "{}.csv".format(name))
df = pd.read_csv(path, header=None)
sentences = []
for i in range(df.shape[0]):
sentences.append(df.iloc[i, 0])
sentences.append(df.iloc[i, 1])
labels = [-1 for _ in range(len(sentences))]
return sentences, labels
def get_tokenizer(model):
tokenizer = AutoTokenizer.from_pretrained(model)
return tokenizer
def get_ids_mask(sentences, tokenizer, max_length):
tokenized = [tokenizer.tokenize(s) for s in sentences]
tokenized = [t[:(max_length - 1)] + ['SEP'] for t in tokenized]
ids = [tokenizer.convert_tokens_to_ids(t) for t in tokenized]
ids = np.array([np.pad(i, (0, max_length - len(i)),
mode='constant') for i in ids])
amasks = []
for seq in ids:
seq_mask = [float(i > 0) for i in seq]
amasks.append(seq_mask)
return ids, amasks
def flatten(tensor):
tensor = torch.cat([tensor[:, 0], tensor[:, 1]])
return tensor
def unflatten(tensor):
tensor = torch.stack([tensor[:tensor.shape[0] // 2], tensor[tensor.shape[0] // 2:]], axis=1)
return tensor
###Output
_____no_output_____
###Markdown
The Optimizers that we use. Sources: Vadam: [https://github.com/emtiyaz/vadam](https://github.com/emtiyaz/vadam) SGLD: [https://github.com/noahgolmant/SGLD/blob/master/sgld/optimizers.py](https://github.com/noahgolmant/SGLD/blob/master/sgld/optimizers.py)
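For reference, the SGLD variant implemented below simply takes an SGD step on the minibatch gradient and then adds isotropic Gaussian noise; unlike the original Welling & Teh formulation, where the noise variance is tied to the step size, the noise variance here is a separate `noise_scale` hyperparameter (our summary of the code):
$$\theta_{t+1} = \theta_t - \alpha\, \hat{g}_t + \sqrt{c}\,\eta_t, \qquad \eta_t \sim \mathcal{N}(0, I),$$
where $\hat{g}_t$ is the (optionally momentum-smoothed and weight-decayed) minibatch gradient, $\alpha$ the learning rate and $c$ the `noise_scale`.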
###Code
#@title
class Vadam(Optimizer):
"""Implements Vadam algorithm.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
train_set_size (int): number of data points in the full training set
(objective assumed to be on the form (1/M)*sum(-log p))
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
prior_prec (float, optional): prior precision on parameters
(default: 1.0)
prec_init (float, optional): initial precision for variational dist. q
(default: 1.0)
num_samples (float, optional): number of MC samples
(default: 1)
"""
def __init__(self, params, train_set_size, lr=1e-3, betas=(0.9, 0.999), prior_prec=1.0, prec_init=1.0, num_samples=1):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= prior_prec:
raise ValueError("Invalid prior precision value: {}".format(prior_prec))
if not 0.0 <= prec_init:
raise ValueError("Invalid initial s value: {}".format(prec_init))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if num_samples < 1:
raise ValueError("Invalid num_samples parameter: {}".format(num_samples))
if train_set_size < 1:
raise ValueError("Invalid number of training data points: {}".format(train_set_size))
self.num_samples = num_samples
self.train_set_size = train_set_size
defaults = dict(lr=lr, betas=betas, prior_prec=prior_prec, prec_init=prec_init)
super(Vadam, self).__init__(params, defaults)
def step(self, closure):
"""Performs a single optimization step.
Arguments:
closure (callable): A closure that reevaluates the model
and returns the loss.
"""
if closure is None:
raise RuntimeError('For now, Vadam only supports that the model/loss can be reevaluated inside the step function')
grads = []
grads2 = []
for group in self.param_groups:
for p in group['params']:
grads.append([])
grads2.append([])
# Compute grads and grads2 using num_samples MC samples
for s in range(self.num_samples):
# Sample noise for each parameter
pid = 0
original_values = {}
for group in self.param_groups:
for p in group['params']:
original_values.setdefault(pid, p.detach().clone())
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.ones_like(p.data) * (group['prec_init'] - group['prior_prec']) / self.train_set_size
# A noisy sample
raw_noise = torch.normal(mean=torch.zeros_like(p.data), std=1.0)
p.data.addcdiv_(1., raw_noise, torch.sqrt(self.train_set_size * state['exp_avg_sq'] + group['prior_prec']))
pid = pid + 1
# Call the loss function and do BP to compute gradient
loss = closure()
# Replace original values and store gradients
pid = 0
for group in self.param_groups:
for p in group['params']:
# Restore original parameters
p.data = original_values[pid]
if p.grad is None:
continue
if p.grad.is_sparse:
raise RuntimeError('Vadam does not support sparse gradients')
# Aggregate gradients
g = p.grad.detach().clone()
if s==0:
grads[pid] = g
grads2[pid] = g**2
else:
grads[pid] += g
grads2[pid] += g**2
pid = pid + 1
# Update parameters and states
pid = 0
for group in self.param_groups:
for p in group['params']:
if grads[pid] is None:
continue
# Compute MC estimate of g and g2
grad = grads[pid].div(self.num_samples)
grad2 = grads2[pid].div(self.num_samples)
tlambda = group['prior_prec'] / self.train_set_size
state = self.state[p]
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad + tlambda * original_values[pid])
exp_avg_sq.mul_(beta2).add_(1 - beta2, grad2)
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
numerator = exp_avg.div(bias_correction1)
denominator = exp_avg_sq.div(bias_correction2).sqrt().add(tlambda)
# Update parameters
p.data.addcdiv_(-group['lr'], numerator, denominator)
pid = pid + 1
return loss
def get_weight_precs(self, ret_numpy=False):
"""Returns the posterior weight precisions.
Arguments:
ret_numpy (bool): If true, the returned list contains numpy arrays,
otherwise it contains torch tensors.
"""
weight_precs = []
for group in self.param_groups:
weight_prec = []
for p in group['params']:
state = self.state[p]
prec = self.train_set_size * state['exp_avg_sq'] + group['prior_prec']
if ret_numpy:
prec = prec.cpu().numpy()
weight_prec.append(prec)
weight_precs.append(weight_prec)
return weight_precs
#The MC prediction function is changed here to accept our inputs (token ids and attention masks)
def get_mc_predictions(self, forward_function, inputs_ids, inputs_masks, mc_samples=1, ret_numpy=False, *args, **kwargs):
"""Returns Monte Carlo predictions.
Arguments:
forward_function (callable): The forward function of the model
that takes inputs and returns the outputs.
inputs (FloatTensor): The inputs to the model.
mc_samples (int): The number of Monte Carlo samples.
ret_numpy (bool): If true, the returned list contains numpy arrays,
otherwise it contains torch tensors.
"""
predictions = []
for mc_num in range(mc_samples):
pid = 0
original_values = {}
for group in self.param_groups:
for p in group['params']:
original_values.setdefault(pid, torch.zeros_like(p.data)+p.data)
state = self.state[p]
# State initialization
if len(state) == 0:
raise RuntimeError('Optimizer not initialized')
# A noisy sample
raw_noise = torch.normal(mean=torch.zeros_like(p.data), std=1.0)
p.data.addcdiv_(1., raw_noise, torch.sqrt(self.train_set_size * state['exp_avg_sq'] + group['prior_prec']))
pid = pid + 1
# Call the forward computation function
outputs = forward_function(inputs_ids, inputs_masks, *args, **kwargs)
if ret_numpy:
outputs = outputs.data.cpu().numpy()
predictions.append(outputs)
pid = 0
for group in self.param_groups:
for p in group['params']:
p.data = original_values[pid]
pid = pid + 1
return predictions
def _kl_gaussian(self, p_mu, p_sigma, q_mu, q_sigma):
var_ratio = (p_sigma / q_sigma).pow(2)
t1 = ((p_mu - q_mu) / q_sigma).pow(2)
return 0.5 * torch.sum((var_ratio + t1 - 1 - var_ratio.log()))
def kl_divergence(self):
"""Returns the KL divergence between the variational distribution
and the prior.
"""
kl = 0
for group in self.param_groups:
for p in group['params']:
state = self.state[p]
prec0 = group['prior_prec']
prec = self.train_set_size * state['exp_avg_sq'] + group['prior_prec']
kl += self._kl_gaussian(p_mu = p,
p_sigma = 1. / torch.sqrt(prec),
q_mu = 0.,
q_sigma = 1. / math.sqrt(prec0))
return kl
#Taken from https://github.com/noahgolmant/SGLD/blob/master/sgld/optimizers.py
class SGLD(Optimizer):
r"""Implements stochastic gradient descent (optionally with momentum).
Nesterov momentum is based on the formula from
`On the importance of initialization and momentum in deep learning`__.
Args:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float): learning rate
momentum (float, optional): momentum factor (default: 0)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
dampening (float, optional): dampening for momentum (default: 0)
nesterov (bool, optional): enables Nesterov momentum (default: False)
noise_scale (float, optional): variance of isotropic noise for langevin
Example:
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
__ http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf
.. note::
The implementation of SGD with Momentum/Nesterov subtly differs from
Sutskever et. al. and implementations in some other frameworks.
Considering the specific case of Momentum, the update can be written as
.. math::
v = \rho * v + g \\
p = p - lr * v
where p, g, v and :math:`\rho` denote the parameters, gradient,
velocity, and momentum respectively.
This is in contrast to Sutskever et. al. and
other frameworks which employ an update of the form
.. math::
v = \rho * v + lr * g \\
p = p - v
The Nesterov version is analogously modified.
"""
def __init__(self, params, lr, momentum=0, dampening=0,
weight_decay=0, nesterov=False,
noise_scale=0.1):
if lr < 0.0:
raise ValueError("Invalid learning rate: {}".format(lr))
if momentum < 0.0:
raise ValueError("Invalid momentum value: {}".format(momentum))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov)
if nesterov and (momentum <= 0 or dampening != 0):
raise ValueError("Nesterov momentum requires a momentum and zero dampening")
super(SGLD, self).__init__(params, defaults)
self.noise_scale = noise_scale
def __setstate__(self, state):
super(SGLD, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('nesterov', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
returns norm of the step we took for variance analysis later
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
weight_decay = group['weight_decay']
momentum = group['momentum']
dampening = group['dampening']
nesterov = group['nesterov']
for p in group['params']:
if p.grad is None:
continue
d_p = p.grad.data
if weight_decay != 0:
d_p.add_(weight_decay, p.data)
if momentum != 0:
param_state = self.state[p]
if 'momentum_buffer' not in param_state:
buf = param_state['momentum_buffer'] = torch.zeros_like(p.data)
buf.mul_(momentum).add_(d_p)
else:
buf = param_state['momentum_buffer']
buf.mul_(momentum).add_(1 - dampening, d_p)
if nesterov:
d_p = d_p.add(momentum, buf)
else:
d_p = buf
p.data.add_(-group['lr'], d_p)
p.data.add_(np.sqrt(self.noise_scale), torch.randn_like(p.data))
return loss
###Output
_____no_output_____
###Markdown
Functions to train the models. Source of train: [https://github.com/hendrycks/ethics](https://github.com/hendrycks/ethics). train_variational is an adaptation of train that handles Vadam's closure-based step
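Both loops optimise the pairwise comparison loss used for the utilitarianism task: each batch contains sentence pairs in which the first sentence is the preferred one, so with model scores $s_1, s_2$ the per-pair loss is the binary cross-entropy of the score difference against a target of 1 (our reading of the code below):
$$\mathcal{L}(s_1, s_2) = -\log \sigma(s_1 - s_2).$$
The structural difference in train_variational is that Vadam's `step` takes a closure that re-evaluates the loss, because the optimizer recomputes gradients at weights perturbed with posterior noise (possibly several MC samples per step).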
###Code
def train(model, optimizer, train_dataloader, epoch, *args, track_validation_loss = False, log_interval = 50, verbose=False):
# Set model to training mode
model.train()
criterion = torch.nn.BCEWithLogitsLoss()
ntrain_steps = len(train_dataloader)
results_validation = []
# Loop over each batch from the training set
for step, batch in enumerate(train_dataloader):
# Copy data to GPU if needed
batch = tuple(t.cuda() for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# reshape
b_input_ids = flatten(b_input_ids)
b_input_mask = flatten(b_input_mask)
# Zero gradient buffers
optimizer.zero_grad()
# Forward pass
output = model(b_input_ids, attention_mask=b_input_mask)[0] # dim 1
output = unflatten(output)
diffs = output[:, 0] - output[:, 1]
loss = criterion(diffs.squeeze(dim=1), torch.ones(diffs.shape[0]).cuda())
# Backward pass
loss.backward()
# Update weights
optimizer.step()
if step % log_interval == 0 and step > 0 and verbose:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, step, ntrain_steps, 100. * step / ntrain_steps, loss))
if step % log_interval == 0 and track_validation_loss:
results_validation.append(evaluate(model, validation_dataloader))
if track_validation_loss:
return results_validation
def train_variational(model, optimizer, train_dataloader, epoch, track_validation_loss = False, log_interval = 10, verbose=False, *args):
# Set model to training mode
model.train()
criterion = torch.nn.BCEWithLogitsLoss()
ntrain_steps = len(train_dataloader)
results_validation = []
# Loop over each batch from the training set
for step, batch in enumerate(train_dataloader):
# Copy data to GPU if needed
batch = tuple(t.cuda() for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# reshape
b_input_ids = flatten(b_input_ids)
b_input_mask = flatten(b_input_mask)
def closure():
optimizer.zero_grad()
output = model(b_input_ids, attention_mask=b_input_mask)[0] # dim 1
output = unflatten(output)
diffs = output[:, 0] - output[:, 1]
loss = criterion(diffs.squeeze(dim=1), torch.ones(diffs.shape[0]).cuda())
loss.backward()
return loss
loss = optimizer.step(closure)
if step % log_interval == 0 and step > 0 and verbose:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, step, ntrain_steps, 100. * step / ntrain_steps, loss))
if step % log_interval == 0 and track_validation_loss:
results_validation.append(evaluate(model, validation_dataloader))
if track_validation_loss:
return results_validation
###Output
_____no_output_____
###Markdown
Functions to evaluate the models and their uncertainties. Source of evaluate: [https://github.com/hendrycks/ethics](https://github.com/hendrycks/ethics). variational_inference_uncertainties_roberta is an adaptation of evaluate that predicts from samples of the approximate posterior and estimates the certainty of each prediction
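A sketch of how the per-example certainty is derived in the function below (our summary): for each test pair we record a binary outcome from each of the $S$ posterior samples, average them into $\bar{p}$, and report
$$\text{prediction} = \mathbb{1}[\bar{p} > 0.5], \qquad \text{confidence} = \max(\bar{p},\, 1-\bar{p}),$$
with ties at $\bar{p} = 0.5$ broken uniformly at random.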
###Code
def evaluate(model, dataloader):
model.eval()
cors = []
for step, batch in enumerate(dataloader):
# Copy data to GPU if needed
batch = tuple(t.cuda() for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# reshape
b_input_ids = flatten(b_input_ids)
b_input_mask = flatten(b_input_mask)
# Forward pass
with torch.no_grad():
output = model(b_input_ids, attention_mask=b_input_mask)[0] # dim 1
output = unflatten(output)
diffs = output[:, 0] - output[:, 1]
diffs = diffs.squeeze(dim=1).detach().cpu().numpy()
cors.append(diffs > 0)
cors = np.concatenate(cors)
acc = np.mean(cors)
print('Acc {:.3f}'.format(acc))
return acc
def variational_inference_uncertainties_roberta(model, std_weights, std_bias, dataloader, mc_samples = 2):
model_sampled = copy.deepcopy(model)
model_sampled.eval()
vi_pred = {}
for i in range(mc_samples):
#print(i)
cors = []
model_sampled.module.classifier.out_proj.weight = copy.deepcopy(torch.nn.Parameter(torch.normal(model.module.classifier.out_proj.weight, std_weights)))
model_sampled.module.classifier.out_proj.bias = copy.deepcopy(torch.nn.Parameter(torch.normal(model.module.classifier.out_proj.bias, std_bias)))
model_sampled.eval()
for step, batch in enumerate(dataloader):
# Copy data to GPU if needed
batch = tuple(t.cuda() for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# reshape
b_input_ids = flatten(b_input_ids)
b_input_mask = flatten(b_input_mask)
# Forward pass
with torch.no_grad():
output = model_sampled(b_input_ids, attention_mask=b_input_mask)[0] # dim 1
output = unflatten(output)
diffs = output[:, 0] - output[:, 1]
diffs = diffs.squeeze(dim=1).detach().cpu().numpy()
cors.append(diffs > 0)
cors = np.concatenate(cors)
vi_pred[i] = cors
final_predictions = np.zeros((mc_samples, vi_pred[0].shape[0]))
for i in range(mc_samples):
final_predictions[i] = vi_pred[i]
#Calculate the means and variances
mean = np.mean(final_predictions, axis = 0)
variance = np.var(final_predictions, axis = 0)
# Finding the variance for the instances that were correctly and wrongly classified
correct_variance = variance[mean>=0.5]
wrong_variance = variance[mean<0.5]
# Get binary predictions
predictions = np.copy(mean)
predictions[mean>0.5] = 1
predictions[mean<0.5] = 0
#When the MC mean is exactly 0.5 we break the tie uniformly at random, because otherwise we would artificially be counting all those cases as successes
for j in range(predictions.shape[0]):
if predictions[j] == 0.5:
predictions[j] = np.round(np.random.rand())
# Calculate the confidence in each prediction
confidence = np.copy(mean)
confidence[mean<0.5] = 1 - confidence[mean<0.5] # For the instances where we're classifying as 0, the confidence is the opposite
results = {'predictions': predictions,
'confidence': confidence,
'mean': mean,
'variance': variance,
'avg_correct_variance': np.mean(correct_variance),
'avg_wrong_variance': np.mean(wrong_variance)}
return results
###Output
_____no_output_____
###Markdown
Functions needed to plot the certainty calibration and compute the expected calibration error (ECE) [Guo et al., 2017](https://arxiv.org/pdf/1706.04599.pdf) of each model
###Code
def split_in_bins(predictions, confidence):
num_bins = 5
l = np.linspace(0.5,1,num_bins+1)
bins = np.linspace(0.5,.9,num_bins)+.05
conf = []
acc = []
num_in_bins = []
for ind, (lower,upper) in enumerate(zip(l[:-1], l[1:])):
indxs = np.where((confidence<=upper) & (confidence>lower)) # B_m
this_bin_pred = predictions[indxs]
this_bin_conf = confidence[indxs]
# Get average confidence
avg_conf = np.mean(this_bin_conf)
# Get average accuracy
avg_acc = np.mean(this_bin_pred)
conf.append(avg_conf)
acc.append(avg_acc)
num_in_bins.append(len(this_bin_pred))
return conf, acc, bins, num_in_bins
def get_ECE(confidence, accuracy, num_in_bins):
'''
confidence: list of conf(B_m)
accuracy: list of acc(B_m)
num_in_bins: number of samples in each bin
'''
assert len(confidence) == len(accuracy)
num_in_bins = np.asarray(num_in_bins)
n = num_in_bins.sum() # Tot number of samples
ECE = 0
for i in range(len(confidence)):
ECE += (num_in_bins[i]/(n)) * np.abs(accuracy[i] - confidence[i])
return ECE
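# Quick illustration of how the two helpers above fit together
# (hypothetical numbers, purely for demonstration):
_demo_correct = np.array([1, 1, 0, 1, 0, 1, 1, 1])   # 1 = prediction was correct
_demo_conf = np.array([0.55, 0.62, 0.71, 0.78, 0.83, 0.88, 0.93, 0.97])
_conf_b, _acc_b, _bins_b, _n_b = split_in_bins(_demo_correct, _demo_conf)
_demo_ece = get_ECE(_conf_b, _acc_b, _n_b)            # lower means better calibrated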
def plot_reliability_diagram_rerelease(accuracy_easy, bins_easy, accuracy_hard, bins_hard,accuracy_unmatched, bins_unmatched, figure_file): #accuracy_unmatched, bins_unmatched, figure_file):
accuracy = (accuracy_easy, accuracy_hard, accuracy_unmatched)
bins = (bins_easy, bins_hard, bins_unmatched)
width=0.1
fig, ax = plt.subplots(figsize=(13,5), nrows=1, ncols=3)
fig.suptitle("Assessing calibration of model certainty against accuracy\n(Vadam-optimised RoBERTa-large)", fontsize=14, fontweight='bold')
for i in range(3):
ax[i].bar(bins[i], accuracy[i], width=width, color='k', edgecolor='black')
ax[i].plot(np.linspace(0.5,1,6),np.linspace(0.5,1,6),linestyle='--', color='red')
ax[i].set_ylabel("Accuracy")
ax[i].set_xlabel("Model certainty")
if i==0:
ax[i].set_title("Easy reformulated test dataset")
if i==1:
ax[i].set_title("Hard reformulated test dataset")
if i == 2:
ax[i].set_title("Unmatched reformulated test dataset")
fig.tight_layout(rect=[0, 0, 1, 0.90])
fig.legend(['Perfect certainty','Model certainty'],loc=(0.75,0.15), facecolor="white")
fig.savefig(figure_file, dpi=250)
fig.show()
def plot_reliability_diagram_original(accuracy_easy, bins_easy, accuracy_hard, bins_hard, figure_file): #accuracy_unmatched, bins_unmatched, figure_file):
accuracy = (accuracy_easy, accuracy_hard)
bins = (bins_easy, bins_hard)
width=0.1
fig, ax = plt.subplots(figsize=(8,5), nrows=1, ncols=2)
#Change the suptitle with each model accordingly
fig.suptitle("Assessing calibration of model certainty against accuracy\n(Vadam-optimised RoBERTa-large)", fontsize=14, fontweight='bold')
for i in range(2):
ax[i].bar(bins[i], accuracy[i], width=width, color='k', edgecolor='black')
ax[i].plot(np.linspace(0.5,1,6),np.linspace(0.5,1,6),linestyle='--', color='red')
ax[i].set_ylabel("Accuracy")
ax[i].set_xlabel("Model certainty")
if i==0:
ax[i].set_title("Easy test dataset")
if i==1:
ax[i].set_title("Hard test dataset")
fig.tight_layout(rect=[0, 0, 1, 0.90])
fig.legend(['Perfect certainty','Model certainty'],loc=(0.75,0.15), facecolor="white")
fig.savefig(figure_file, dpi=250)
fig.show()
###Output
_____no_output_____
###Markdown
A class needed to specify the model and the training parameters we use (lr, epochs...)
###Code
class MyArgs:
def __init__(self, model = None, ngpus = 1, save = True, verbose = True, weight_decay=0.01, learning_rate=2e-5, nepochs=2, batch_size=16, max_length=64,
nruns=1):
self.model = model
self.ngpus = ngpus
self.weight_decay = weight_decay
self.learning_rate = learning_rate
self.nepochs = nepochs
self.batch_size = batch_size
self.max_length = max_length
self.nruns = nruns
self.save = save
self.verbose = verbose
###Output
_____no_output_____
###Markdown
SGLD Experiments We start by fine-tuning a BERT-base model using AdamW. Once we have converged to a local optimum, we switch to SGLD in order to try to draw samples from the posterior distribution
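 Before running the experiment, the cell below sketches what we assume a single SGLD step does to one parameter tensor: an SGD move plus injected Gaussian noise whose magnitude is controlled by the `noise_scale` argument passed to the `SGLD` optimizer (an illustrative sketch only, not the optimizer's actual implementation).
###Code
import torch

def sgld_step_sketch(param, grad, lr, noise_scale):
    # gradient-descent move plus Gaussian noise (Langevin dynamics)
    noise = noise_scale * torch.randn_like(param)
    return param - lr * grad + noise
###Output
_____no_output_____
###Markdown
We now fine-tune with AdamW and then continue training with SGLD for several noise scales.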
###Code
args = MyArgs()
data_dir = '1_original_study_datasets'
train_name = "util_train"
#We now prepare the plots for the accuracy with the SGLD optimizer
gaussian_noise = [1e-4, 1e-5, 1e-6, 1e-8]
accuracy_graphs = {}
for noise in gaussian_noise:
#First we train the model using adamw
args.model = "bert-base-uncased"
args.learning_rate = 1e-5
args.batch_size = 8
args.nepochs = 2
model, optimizer = load_model(args)
train_data = load_process_data(args, data_dir, "util", train_name)
train_set, validation_set = torch.utils.data.random_split(train_data, [int(0.8*len(train_data)), \
len(train_data) - int(0.8*len(train_data))], generator=torch.Generator().manual_seed(42))
train_dataloader = DataLoader(train_set, batch_size=args.batch_size // 2, shuffle=True)
validation_dataloader = DataLoader(validation_set, batch_size=args.batch_size // 2, shuffle = False)
for epoch in range(1, args.nepochs + 1):
train(model, optimizer, train_dataloader, epoch, verbose=False)
accuracy_graphs[noise] = [evaluate(model, validation_dataloader)]
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters()
if not any(nd in n for nd in no_decay)],
'weight_decay': args.weight_decay},
{'params': [p for n, p in model.named_parameters()
if any(nd in n for nd in no_decay)],
'weight_decay': 0.0}
]
optimizer_sgld = SGLD(optimizer_grouped_parameters,
lr = args.learning_rate,
noise_scale = noise)
#We only train for one epoch
accuracy_graphs[noise] += train(model, optimizer_sgld, train_dataloader, epoch, validation_dataloader, track_validation_loss=True, verbose=False)
plt.figure()
plt.plot(accuracy_graphs[1e-4])
plt.ylabel('Validation accuracy. Noise 1e-4')
plt.xlabel('SGLD_step/50')
plt.show()
plt.figure()
plt.plot(accuracy_graphs[1e-5])
plt.ylabel('Validation accuracy. Noise 1e-5')
plt.xlabel('SGLD_step/50')
plt.show()
plt.figure()
plt.plot(accuracy_graphs[1e-6])
plt.ylabel('Validation accuracy. Noise 1e-6')
plt.xlabel('SGLD_step/50')
plt.show()
plt.figure()
plt.plot(accuracy_graphs[1e-8])
plt.ylabel('Validation accuracy. Noise 1e-8')
plt.xlabel('SGLD_step/50')
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, our BERT model is very sensitive to the amount of noise added by SGLD. Unless the noise is very small, it quickly falls away from the optimum and is unable to learn. A likely reason is that, even though we perturb every weight by a very small amount, the number of weights being perturbed is very large, so the total perturbation to the model can be significant (a quick numerical estimate is given below). Vadam: Approximating the posterior of the full model We then try to train a full pretrained BERT model using variational Adam. We do a hyperparameter search over the prior precisions and posterior initializations but we don't observe any meaningful learning in any case. To give a better understanding of the quality of the results, we calculate the accuracy of the pretrained BERT model without further fine-tuning
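 A quick back-of-the-envelope estimate of that effect: for an i.i.d. Gaussian perturbation with standard deviation sigma across N weights, the expected L2 norm is roughly sigma * sqrt(N). Assuming about 110M parameters for BERT-base (an assumption, not measured from the model), sigma = 1e-4 perturbs the whole model by a norm of roughly 1, while sigma = 1e-8 perturbs it by only about 1e-4.
###Code
import numpy as np

n_params = 110e6  # approximate parameter count of BERT-base (assumption)
# expected L2 norm of one i.i.d. Gaussian perturbation across all weights
expected_norm = {sigma: np.sqrt(n_params) * sigma for sigma in [1e-4, 1e-5, 1e-6, 1e-8]}
# e.g. sigma = 1e-4 gives a norm of about 1.05, sigma = 1e-8 about 1e-4
###Output
_____no_output_____
###Markdown
We now run the Vadam hyperparameter search described above.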
###Code
args = MyArgs()
data_dir = '1_original_study_datasets'
train_name = "util_train"
args.model = "bert-base-uncased"
args.learning_rate = 1e-5
args.batch_size = 16
args.nepochs = 2
prior_prec = [1e7, 1e5, 1e3, 1e1, 1e-1, 1e-3, 1e-5]
betas = (0.9,0.995)
train_mc_samples = 5
eval_mc_samples = 20
training_accuracy = {}
validation_accuracy = {}
test_accuracy = {}
hard_test_accuracy = {}
model, optimizer = load_model(args)
train_data = load_process_data(args, data_dir, "util", train_name)
train_set, validation_set = torch.utils.data.random_split(train_data, [int(0.8*len(train_data)), \
len(train_data) - int(0.8*len(train_data))], generator=torch.Generator().manual_seed(42))
train_dataloader = DataLoader(train_set, batch_size=args.batch_size // 2, shuffle=True)
validation_dataloader = DataLoader(validation_set, batch_size=args.batch_size // 2, shuffle=False)
for precision in prior_prec:
model, optimizer = load_model(args)
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters()
if not any(nd in n for nd in no_decay)],
'weight_decay': args.weight_decay},
{'params': [p for n, p in model.named_parameters()
if any(nd in n for nd in no_decay)],
'weight_decay': 0.0}
]
optimizer_vadam = Vadam(optimizer_grouped_parameters,
lr = args.learning_rate,
betas = betas,
prior_prec = precision,
prec_init = precision,
train_set_size = len(train_set))
for epoch in range(1, args.nepochs + 1):
print()
train_variational(model, optimizer_vadam, train_dataloader, epoch, verbose=True)
training_accuracy[precision] = evaluate(model, train_dataloader)
print(training_accuracy[precision])
validation_accuracy[precision] = evaluate(model, validation_dataloader)
print(validation_accuracy[precision])
validation_accuracy
#import pickle
#Save results
#f = open("hyperparameter_search_vadam_full_model_results_validation_final.pkl","wb")
#pickle.dump(validation_accuracy,f)
#f.close()
###Output
_____no_output_____ |
sorting_searching/radix_sort/radix_sort_challenge.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement radix sort.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is the input a list? * Yes* Can we assume the inputs are valid? * Check for None in place of an array * Assume array elements are ints* Do we know the max digits to handle? * No* Are the digits base 10? * Yes* Can we assume this fits memory? * Yes Test Cases* None -> Exception* [] -> []* [128, 256, 164, 8, 2, 148, 212, 242, 244] -> [2, 8, 128, 148, 164, 212, 242, 244, 256] AlgorithmRefer to the [Solution Notebook](http://nbviewer.jupyter.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/radix_sort/radix_sort_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class RadixSort(object):
def sort(self, array, base=10):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_radix_sort.py
from nose.tools import assert_equal, assert_raises
class TestRadixSort(object):
def test_sort(self):
radix_sort = RadixSort()
assert_raises(TypeError, radix_sort.sort, None)
assert_equal(radix_sort.sort([]), [])
array = [128, 256, 164, 8, 2, 148, 212, 242, 244]
expected = [2, 8, 128, 148, 164, 212, 242, 244, 256]
assert_equal(radix_sort.sort(array), expected)
print('Success: test_sort')
def main():
test = TestRadixSort()
test.test_sort()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement radix sort.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is the input a list? * Yes* Can we assume the inputs are valid? * Check for None in place of an array * Assume array elements are ints* Do we know the max digits to handle? * No* Are the digits base 10? * Yes* Can we assume this fits memory? * Yes Test Cases* None -> Exception* [] -> []* [128, 256, 164, 8, 2, 148, 212, 242, 244] -> [2, 8, 128, 148, 164, 212, 242, 244, 256] AlgorithmRefer to the [Solution Notebook](http://nbviewer.jupyter.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/radix_sort/radix_sort_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class RadixSort(object):
def sort(self, array, base=10):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_radix_sort.py
import unittest
class TestRadixSort(unittest.TestCase):
def test_sort(self):
radix_sort = RadixSort()
self.assertRaises(TypeError, radix_sort.sort, None)
self.assertEqual(radix_sort.sort([]), [])
array = [128, 256, 164, 8, 2, 148, 212, 242, 244]
expected = [2, 8, 128, 148, 164, 212, 242, 244, 256]
self.assertEqual(radix_sort.sort(array), expected)
print('Success: test_sort')
def main():
test = TestRadixSort()
test.test_sort()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement radix sort.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is the input a list? * Yes* Can we assume the inputs are valid? * Check for None in place of an array * Assume array elements are ints* Do we know the max digits to handle? * No* Are the digits base 10? * Yes* Can we assume this fits memory? * Yes Test Cases* None -> Exception* [] -> []* [128, 256, 164, 8, 2, 148, 212, 242, 244] -> [2, 8, 128, 148, 164, 212, 242, 244, 256] AlgorithmRefer to the [Solution Notebook](). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class RadixSort(object):
def sort(self, array, base=10):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_radix_sort.py
from nose.tools import assert_equal, assert_raises
class TestRadixSort(object):
def test_sort(self):
radix_sort = RadixSort()
assert_raises(TypeError, radix_sort.sort, None)
assert_equal(radix_sort.sort([]), [])
array = [128, 256, 164, 8, 2, 148, 212, 242, 244]
expected = [2, 8, 128, 148, 164, 212, 242, 244, 256]
assert_equal(radix_sort.sort(array), expected)
print('Success: test_sort')
def main():
test = TestRadixSort()
test.test_sort()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement radix sort.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is the input a list? * Yes* Can we assume the inputs are valid? * Check for None in place of an array * Assume array elements are ints* Do we know the max digits to handle? * No* Are the digits base 10? * Yes* Can we assume this fits memory? * Yes Test Cases* None -> Exception* [] -> []* [128, 256, 164, 8, 2, 148, 212, 242, 244] -> [2, 8, 128, 148, 164, 212, 242, 244, 256] AlgorithmRefer to the [Solution Notebook](http://nbviewer.jupyter.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/radix_sort/radix_sort_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class RadixSort(object):
def sort(self, array, base=10):
# LSD radix sort: repeatedly bucket the values by each digit, least significant digit first
if array is None:
raise TypeError('array cannot be None')
elif len(array) == 0:
return array
else:
maxLength = False
tmp , placement = -1, 1
while not maxLength:
maxLength = True
# declare and initialize buckets
buckets = [list() for _ in range( base )]
# distribute the values into buckets by the current digit
for i in array:
tmp = int((i / placement) % base)
print("tmp: ", tmp)
buckets[tmp].append(i)
if maxLength and tmp > 0:
maxLength = False
# empty lists into array
a = 0
for b in range( base ):
buck = buckets[b]
for i in buck:
array[a] = i
a += 1
print("array: ", array)
# move to next station: 1-->10-->100
placement *= base
return array
class RadixSort(object):
def sort(self, array, base=10):
# Alternative LSD radix sort driven by the digit count of the maximum value
new_array = array
if array is None:
raise TypeError('array cannot be None')
if len(array) == 0:
return array
maxNumber = max(array)
maxLength = len(list(str(abs(maxNumber))))
for length in range(maxLength):
# create one bucket per digit (base 10); each number goes into the bucket for its current digit
buckets = [[] for i in range(base)]
for data in new_array:
# get the digit at the current place value
tmp = (data//base**length)%base
# append the number to bucket tmp (indexed by its current digit)
buckets[tmp].append(data)
print("buckets: ", buckets)
# concatenate the buckets back into a new array
new_array = []
for bucket in buckets:
new_array.extend(bucket)
return new_array
print(123//10)
print(123/10)
print(2%10)
len(list(str(int(192.2))))
###Output
12
12.3
2
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_radix_sort.py
from nose.tools import assert_equal, assert_raises
class TestRadixSort(object):
def test_sort(self):
radix_sort = RadixSort()
assert_raises(TypeError, radix_sort.sort, None)
assert_equal(radix_sort.sort([]), [])
array = [128, 256, 164, 8, 2, 148, 212, 242, 244]
expected = [2, 8, 128, 148, 164, 212, 242, 244, 256]
assert_equal(radix_sort.sort(array), expected)
print('Success: test_sort')
def main():
test = TestRadixSort()
test.test_sort()
if __name__ == '__main__':
main()
###Output
buckets: [[], [], [2, 212, 242], [], [164, 244], [], [256], [], [128, 8, 148], []]
buckets: [[2, 8], [212], [128], [], [242, 244, 148], [256], [164], [], [], []]
buckets: [[2, 8], [128, 148, 164], [212, 242, 244, 256], [], [], [], [], [], [], []]
Success: test_sort
|
notebooks/25. waves_price.ipynb | ###Markdown
Waves Price by: Widya Meiriska 1. Read Dataset
###Code
import csv
import pandas as pd
import numpy as np
df = pd.read_csv('../data/raw/bitcoin/waves_price.csv',parse_dates = ['Date'])
df.tail()
###Output
_____no_output_____
###Markdown
2. Data Investigation
###Code
df.count()
df.dtypes
###Output
_____no_output_____
###Markdown
There are missing values, and several entries have inconsistent formats: some numeric values are not stored as numbers
###Code
df['Volume'] = df['Volume'].apply(lambda x: float(str(x).replace(',','')))
df['Market Cap'] = df['Market Cap'].replace('-', 'NaN')
df['Market Cap'] = df['Market Cap'].apply(lambda x: float(str(x).replace(',','')))
df.tail()
df.count()
df.info()
missingdf = pd.DataFrame(df.isna().sum()).rename(columns = {0: 'total'})
missingdf['percent'] = missingdf['total'] / len(df)
missingdf
###Output
_____no_output_____
###Markdown
I try to fill in the missing values by estimating them from the other columns
###Code
# Lets see the correlation between each column
correlation = df.corr(method="pearson")
correlation['Market Cap']
#Plot data to see the relation between each column
import matplotlib.pyplot as plt
plt.figure(figsize=(25, 25))
O = df['Open']
MC = df['Market Cap']
plt.subplot(5,5,5)
plt.scatter(MC, O)
plt.title('Open vs Market Cap')
plt.show
###Output
_____no_output_____
###Markdown
To fill the NaN values in Market Cap I fit a linear model on the Open column, because from the correlations above we can see that Market Cap is most strongly correlated with Open.
###Code
from sklearn import linear_model
model = linear_model.LinearRegression()
Open = df[['Open']].iloc[0:442]
Market_Cap = df['Market Cap'].iloc[0:442]
#Train model
model.fit(Open, Market_Cap)
# The model score (R^2) is almost 1, which indicates the fit is very close to the observed Market Cap
model.score(Open, Market_Cap)
###Output
_____no_output_____
###Markdown
Here I add a new column, Market Cap Predict, which contains the model's Market Cap predictions for every row and therefore has no NaN values
###Code
#Add a new column which is filled the missing data from model fit
open = df[['Open']]
Market_Cap_Predict = model.predict(open)
df['Market Cap Predict'] = Market_Cap_Predict
df.tail()
df.count()
df.describe()
###Output
_____no_output_____
###Markdown
Now the data is clean: there are no null values and the columns have a consistent format 3. Data Visualization
###Code
# Set Date as it's index
df.set_index('Date', inplace = True )
# Visualize each column against Date
%matplotlib inline
plt.figure(figsize=(25, 25))
plt.subplot(3,3,1)
plt.ylabel('Open')
df.Open.plot()
plt.title('Date vs Open')
plt.subplot(3,3,2)
plt.ylabel('Low')
df.Low.plot()
plt.title('Date vs Low')
plt.subplot(3,3,3)
plt.ylabel('High')
df.High.plot()
plt.title('Date vs High')
plt.subplot(3,3,4)
plt.ylabel('Close')
df.Close.plot()
plt.title('Date vs Close')
plt.subplot(3,3,5)
plt.ylabel('Volume')
df.Volume.plot()
plt.title('Date vs Volume')
plt.subplot(3,3,6)
plt.ylabel('Market Cap Predict')
df['Market Cap Predict'].plot()
plt.title('Date vs Market Cap Predict')
###Output
_____no_output_____ |
Notebooks/Machine Learning - base line-5-clases.ipynb | ###Markdown
Machine learning Imports
###Code
import sys
import cufflinks
import pandas as pd
import numpy as np
from tqdm import tqdm
import warnings
warnings.filterwarnings('ignore')
sys.path.append('./..')
cufflinks.go_offline()
from Corpus.Corpus import get_corpus, filter_binary_pn, filter_corpus_small
from auxiliar.VectorizerHelper import vectorizer, vectorizerIdf, preprocessor
from auxiliar import parameters
from auxiliar.HtmlParser import HtmlParser
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
from sklearn.metrics import roc_auc_score
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import roc_curve
from sklearn.model_selection import KFold
import copy
###Output
_____no_output_____
###Markdown
Config
###Code
polarity_dim = 5
clasificadores=['lr', 'ls', 'mb', 'rf']
idf = False
target_names=['Neg', 'Pos']
kfolds = 10
base_dir = '2-clases' if polarity_dim == 2 else ('3-clases' if polarity_dim == 3 else '5-clases')
name = 'machine_learning/tweeter/base_line'
###Output
_____no_output_____
###Markdown
Get data
###Code
cine = HtmlParser(200, "http://www.muchocine.net/criticas_ultimas.php", 1)
data_corpus = get_corpus('general-corpus', 'general-corpus', 1, None)
if polarity_dim == 2:
data_corpus = filter_binary_pn(data_corpus)
cine = filter_binary_pn(cine.get_corpus())
elif polarity_dim == 3:
data_corpus = filter_corpus_small(data_corpus)
cine = filter_corpus_small(cine.get_corpus())
elif polarity_dim == 5:
cine = cine.get_corpus()
cine = cine[:5000]
used_data = cine
split = used_data.shape[0] * 0.7
data_corpus = None
###Output
#Intentando obtener datos del archivo csv...
./../Corpus/../data/general-corpus.csv
#Datos recuperados!
###Markdown
Split data
###Code
used_data.reset_index().groupby('polarity').agg({'index': 'count'}).iplot(kind='bar')
train_corpus = used_data.loc[:split - 1 , :]
test_corpus = used_data.loc[split:, :]
###Output
_____no_output_____
###Markdown
Initialize ML
###Code
vect = vectorizerIdf if idf else vectorizer
ls = CalibratedClassifierCV(LinearSVC()) if polarity_dim == 2 else OneVsRestClassifier(CalibratedClassifierCV(LinearSVC()))
lr = LogisticRegression(solver='lbfgs') if polarity_dim == 2 else OneVsRestClassifier(LogisticRegression())
mb = MultinomialNB() if polarity_dim == 2 else OneVsRestClassifier(MultinomialNB())
rf = RandomForestClassifier() if polarity_dim == 2 else OneVsRestClassifier(RandomForestClassifier())
pipeline_ls = Pipeline([
('vect', copy.deepcopy(vect)),
('ls', ls)
])
pipeline_lr = Pipeline([
('vect', copy.deepcopy(vect)),
('lr', lr)
])
pipeline_mb = Pipeline([
('vect', copy.deepcopy(vect)),
('mb', mb)
])
pipeline_rf = Pipeline([
('vect', copy.deepcopy(vect)),
('rf', rf)
])
pipelines = {
'ls': pipeline_ls,
'lr': pipeline_lr,
'mb': pipeline_mb,
'rf': pipeline_rf
}
pipelines_train = {
'ls': ls,
'lr': lr,
'mb': mb,
'rf': rf
}
###Output
_____no_output_____
###Markdown
Train
###Code
folds = pd.read_pickle('../data/pkls/folds.pkl') # preloaded k-folds
folds = folds.values
x_vect = vect.fit_transform(train_corpus.content, train_corpus.polarity).toarray()
cine_vect = vect.transform(cine.content).toarray()
vect.vocabulary_
results = {}
with tqdm(total=len(clasificadores) * 10) as pbar:
for c in clasificadores:
results[c] = { 'real': {}, 'cine_real': {}, 'predicted': {}, 'cine_predicted': {} }
i = 0
for train_index, test_index in folds:
train_x = x_vect[train_index]
train_y = train_corpus.polarity[train_index]
test_x = x_vect[test_index]
test_y = train_corpus.polarity[test_index]
pipelines_train[c].fit(train_x, train_y)
predicted = pipelines_train[c].predict(test_x)
cine_pred = pipelines_train[c].predict(cine_vect)
results[c]['real'][i] = test_y.values.tolist()
results[c]['cine_real'][i] = cine.polarity.values.tolist()
results[c]['predicted'][i] = predicted.tolist()
results[c]['cine_predicted'][i] = cine_pred.tolist()
i = i + 1
pbar.update(1)
pd.DataFrame(results).to_pickle('../results/'+name+'/'+base_dir+'/results.pkl')
###Output
_____no_output_____ |
AdventOfCode 2021/day 17.ipynb | ###Markdown
Part 1
###Code
import re

ymin = int(re.search('y=(-?[0-9]+)', open('data/input.2021.17.txt').read()).groups()[0])
# Launching upward with speed |ymin|-1 reaches height 1+2+...+(|ymin|-1) = ymin*(ymin+1)//2
(ymin+1)*ymin // 2
(20, -10)
###Output
_____no_output_____ |
Text_Classify.ipynb | ###Markdown
###Code
print('Hello')
###Output
Hello
|
examples/ServiceXDemo.ipynb | ###Markdown
ServiceX Example
###Code
import requests
from minio import Minio
import tempfile
import pyarrow.parquet as pq
###Output
_____no_output_____
###Markdown
Submit the Transform Request We will create a REST request that specifies a DID along with a list of columns we want extracted. We also tell ServiceX that we want the resulting columns to be stored as parquet files in the object store
###Code
servicex_endpoint = "http://localhost:5000/servicex"
response = requests.post(servicex_endpoint+"/transformation", json={
"did": "mc15_13TeV:mc15_13TeV.361106.PowhegPythia8EvtGen_AZNLOCTEQ6L1_Zee.merge.DAOD_STDM3.e3601_s2576_s2132_r6630_r6264_p2363_tid05630052_00",
"columns": "Electrons.pt(), Electrons.eta(), Electrons.phi(), Electrons.e(), Muons.pt(), Muons.eta(), Muons.phi(), Muons.e()",
"image": "sslhep/servicex-transformer:latest",
"result-destination": "object-store",
"result-format": "parquet",
"chunk-size": 7000,
"workers": 1
})
print(response.json())
request_id = response.json()["request_id"]
status_endpoint = servicex_endpoint+"/transformation/{}/status".format(request_id)
###Output
{'request_id': '51722f8b-c6da-4821-8e4d-7325d0195aa2'}
###Markdown
Wait for the Transform to Complete
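 The cell below checks the status once. If you want to block until the transform finishes, a simple polling loop along these lines can be used (a sketch only; it assumes `files-remaining` is returned as an integer count and reuses the status endpoint defined above):
###Code
import time

def wait_until_done(status_endpoint, poll_seconds=10):
    # poll the status endpoint until no files remain to be processed
    while True:
        status = requests.get(status_endpoint).json()
        if status['files-remaining'] == 0:
            return status
        time.sleep(poll_seconds)
###Output
_____no_output_____
###Markdown
Here we simply check the status once: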
###Code
status = requests.get(status_endpoint).json()
print("We have processed {} files there are {} remainng".format(status['files-processed'], status['files-remaining']))
minio_endpoint = "localhost:9000"
minio_client = Minio(minio_endpoint,
access_key='miniouser',
secret_key='leftfoot1',
secure=False)
objects = minio_client.list_objects(request_id)
sample_file = list([file.object_name for file in objects])[0]
print(sample_file)
from IPython.display import display, HTML
with tempfile.TemporaryDirectory() as tmpdirname:
minio_client.fget_object(request_id,
sample_file,
sample_file)
pa_table = pq.read_table(sample_file)
display(pa_table.to_pandas())
###Output
_____no_output_____ |
docs/tutorials/ktr2.ipynb | ###Markdown
Kernel-based Time-varying Regression - Part IIThe previous tutorial covered the basic syntax and structure of **KTR** (or so called **BTVC**); time-series data was fitted with a KTR model accounting for trend and seasonality. In this tutorial a KTR model is fit with trend, seasonality, and additional regressors. To summarize part 1, **KTR** considers a time-series as an additive combination of local-trend, seasonality, and additional regressors. The coefficients for all three components are allowed to vary over time. The time-varying of the coefficients is modeled using kernel smoothing of latent variables. This can also be an advantage of picking this model over other static regression coefficients models. This tutorial covers:1. KTR model structure with regression2. syntax to initialize, fit and predict a model with regressors3. visualization of regression coefficients
###Code
import pandas as pd
import numpy as np
from math import pi
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTR
from orbit.diagnostics.plot import plot_predicted_components
from orbit.utils.plot import get_orbit_style
from orbit.constants.palette import OrbitPalette
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.5f' % x)
orbit_style = get_orbit_style()
plt.style.use(orbit_style);
print(orbit.__version__)
###Output
1.1.1dev
###Markdown
Model Structure This section gives the mathematical structure of the KTR model. In short, it considers a time-series ($y_t$) as the linear combination of three parts. These are the local-trend ($l_t$), seasonality (s_t), and regression ($r_t$) terms at time $t$. That is $$y_t = l_t + s_t + r_t + \epsilon_t, ~ t = 1,\cdots, T,$$where - $\epsilon_t$s comprise a stationary random error process.- $r_t$ is the regression component which can be further expressed as $\sum_{i=1}^{I} {x_{i,t}\beta_{i, t}}$ with covariate $x$ and coefficient $\beta$ on indexes $i,t$For details of how on $l_t$ and $s_t$, please refer to **Part I**. Recall in **KTR**, we express coefficients as$$B=K b^T$$where- *coefficient matrix* $\text{B}$ has size $t \times P$ with rows equal to the $\beta_t$ - *knot matrix* $b$ with size $P\times J$; each entry is a latent variable $b_{p, j}$. The $b_j$ can be viewed as the "knots" from the perspective of spline regression and $j$ is a time index such that $t_j \in [1, \cdots, T]$.- *kernel matrix* $K$ with size $T\times J$ where the $i$th row and $j$th element can be viewed as the normalized weight $k(t_j, t) / \sum_{j=1}^{J} k(t_j, t)$In regression, we generate the matrix $K$ with Gaussian kernel $k_\text{reg}$ as such:$k_\text{reg}(t, t_j;\rho) = \exp ( -\frac{(t-t_j)^2}{2\rho^2} ),$where $\rho$ is the scale hyper-parameter. Data Simulation ModuleIn this example, we will use simulated data in order to have true regression coefficients for comparison. We propose two set of simulation data with three predictors each:The two data sets are:- random walk- sine-cosine like Note the data are random so it may be worthwhile to repeat the next few sets a few times to see how different data sets work. Random Walk Simulated Dataset
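 (A short aside before the simulation helpers: the snippet below sketches the normalized Gaussian kernel weights $K$ described in the model structure section above. It is only an illustration of the formula, not Orbit's internal implementation; the time points, knot positions and scale `rho` are made-up example values.)
###Code
import numpy as np

def gaussian_kernel_weights(T, knot_times, rho):
    # rows: time points t = 1..T, columns: knots t_j; each row is normalized to sum to 1
    t = np.arange(1, T + 1)[:, None]
    tj = np.asarray(knot_times)[None, :]
    k = np.exp(-((t - tj) ** 2) / (2 * rho ** 2))
    return k / k.sum(axis=1, keepdims=True)

# e.g. 100 time points with 5 evenly spaced knots and scale rho = 15
K = gaussian_kernel_weights(100, [1, 25, 50, 75, 100], rho=15.0)
###Output
_____no_output_____
###Markdown
Now we define the simulation helpers for the two datasets.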
###Code
def sim_data_seasonal(n, RS):
""" coefficients curve are sine-cosine like
"""
np.random.seed(RS)
# make the time-varying coefficients
tau = np.arange(1, n+1)/n
data = pd.DataFrame({
'tau': tau,
'date': pd.date_range(start='1/1/2018', periods=n),
'beta1': 2 * tau,
'beta2': 1.01 + np.sin(2*pi*tau),
'beta3': 1.01 + np.sin(4*pi*(tau-1/8)),
'x1': np.random.normal(0, 10, size=n),
'x2': np.random.normal(0, 10, size=n),
'x3': np.random.normal(0, 10, size=n),
'trend': np.cumsum(np.concatenate((np.array([1]), np.random.normal(0, 0.1, n-1)))),
'error': np.random.normal(0, 1, size=n) #stats.t.rvs(30, size=n),#
})
data['y'] = data.x1 * data.beta1 + data.x2 * data.beta2 + data.x3 * data.beta3 + data.error
return data
def sim_data_rw(n, RS, p=3):
""" coefficients curve are random walk like
"""
np.random.seed(RS)
# initializing coefficients at zeros, simulate all coefficient values
lev = np.cumsum(np.concatenate((np.array([5.0]), np.random.normal(0, 0.01, n-1))))
beta = np.concatenate(
[np.random.uniform(0.05, 0.12, size=(1,p)),
np.random.normal(0.0, 0.01, size=(n-1,p))],
axis=0)
beta = np.cumsum(beta, 0)
# simulate regressors
covariates = np.random.normal(0, 10, (n, p))
# observation with noise
y = lev + (covariates * beta).sum(-1) + 0.3 * np.random.normal(0, 1, n)
regressor_col = ['x{}'.format(pp) for pp in range(1, p+1)]
data = pd.DataFrame(covariates, columns=regressor_col)
beta_col = ['beta{}'.format(pp) for pp in range(1, p+1)]
beta_data = pd.DataFrame(beta, columns=beta_col)
data = pd.concat([data, beta_data], axis=1)
data['y'] = y
data['date'] = pd.date_range(start='1/1/2018', periods=len(y))
return data
rw_data = sim_data_rw(n=300, RS=2021, p=3)
rw_data.head(10)
###Output
_____no_output_____
###Markdown
Sine-Cosine Like Simulated Dataset
###Code
sc_data = sim_data_seasonal(n=80, RS=2021)
sc_data.head(10)
###Output
_____no_output_____
###Markdown
Fitting a Model with Regressors The metadata for simulated data sets.
###Code
# num of predictors
p = 3
regressor_col = ['x{}'.format(pp) for pp in range(1, p + 1)]
response_col = 'y'
date_col='date'
###Output
_____no_output_____
###Markdown
As in **Part I** KTR follows sklearn model API style. First an instance of the Orbit class `KTR` is created. Second fit and predict methods are called for that instance. Besides providing meta data such `response_col`, `date_col` and `regressor_col`, there are additional args to provide to specify the estimator and the setting of the estimator. For details, please refer to other tutorials of the **Orbit** site.
###Code
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
###Output
_____no_output_____
###Markdown
Here `predict` has the additional argument `decompose=True`. This returns the components ($l_t$, $s_t$, and $r_t$) of the regression along with the prediction.
###Code
ktr.fit(df=rw_data)
ktr.predict(df=rw_data, decompose=True).head(5)
###Output
INFO:root:Guessed max_plate_nesting = 1
###Markdown
Visualization of Regression Coefficient Curves The function `get_regression_coefs` extracts the coefficients (they will have central credibility intervals if the argument `include_ci=True` is used).
###Code
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
coef_mid.head(5)
###Output
_____no_output_____
###Markdown
Because this is simulated data it is possible to overlay the estimate with the true coefficients.
###Code
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, rw_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1,0.5));
###Output
_____no_output_____
###Markdown
To plot coefficients use the function `plot_regression_coefs` from the KTR class.
###Code
ktr.plot_regression_coefs(figsize=(10, 5), include_ci=True);
###Output
_____no_output_____
###Markdown
These types of time-varying coefficient detection problems are not new. Bayesian approaches include the R packages Bayesian Structural Time Series (a.k.a. **BSTS**) by Scott and Varian (2014) and **tvReg** by Isabel Casas and Ruben Fernandez-Casal (2021); frequentist approaches include Wu and Chiang (2000). For further studies on benchmarking coefficient detection, Ng, Wang and Dai (2021) provide a detailed comparison of **KTR** with other popular time-varying coefficient methods; **KTR** demonstrates superior performance in the random walk data simulation. Customizing Priors and Number of Knot Segments To demonstrate how to specify the number of knots and priors, consider the sine-cosine like simulated dataset. Fitting this dataset is trickier, since there may be better ways to define the number and position of the knots: there are obvious "change points" within the sine-cosine like curves. In **KTR** there are a few arguments that can be leveraged to assign a priori knot attributes: 1. `regressor_init_knot_loc` is used to define the prior mean of the knot values; e.g. in this case there is not a lot of prior knowledge, so zeros are used. 2. `regressor_init_knot_scale` and `regressor_knot_scale` are used to tune the prior sd of the global mean of the knots and the sd of each knot from the global mean, respectively. These create a plausible range for the knot values. 3. `regression_segments` defines the number of between-knot segments (the number of knots - 1). The higher the number of segments, the more change points are possible.
###Code
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
regressor_init_knot_loc=[0] * len(regressor_col),
regressor_init_knot_scale=[10.0] * len(regressor_col),
regressor_knot_scale=[2.0] * len(regressor_col),
regression_segments=6,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
ktr.fit(df=sc_data)
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, sc_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1, 0.5));
###Output
_____no_output_____
###Markdown
Visualize the knots using the `plot_regression_coefs` function with `with_knot=True`.
###Code
ktr.plot_regression_coefs(with_knot=True, figsize=(10, 5), include_ci=True);
###Output
_____no_output_____
###Markdown
Kernel-based Time-varying Regression - Part IIThe previous tutorial covered the basic syntax and structure of **KTR** (or so called **BTVC**); time-series data was fitted with a KTR model accounting for trend and seasonality. In this tutorial a KTR model is fit with trend, seasonality, and additional regressors. To summarize part 1, **KTR** considers a time-series as an additive combination of local-trend, seasonality, and additional regressors. The coefficients for all three components are allowed to vary over time. The time-varying of the coefficients is modeled using kernel smoothing of latent variables. This can also be an advantage of picking this model over other static regression coefficients models. This tutorial covers:1. KTR model structure with regression2. syntax to initialize, fit and predict a model with regressors3. visualization of regression coefficients
###Code
import pandas as pd
import numpy as np
from math import pi
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTR
from orbit.diagnostics.plot import plot_predicted_components
from orbit.utils.plot import get_orbit_style
from orbit.constants.palette import OrbitPalette
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.5f' % x)
orbit_style = get_orbit_style()
plt.style.use(orbit_style);
print(orbit.__version__)
###Output
1.1.1dev
###Markdown
Model Structure This section gives the mathematical structure of the KTR model. In short, it considers a time-series ($y_t$) as the linear combination of three parts. These are the local-trend ($l_t$), seasonality (s_t), and regression ($r_t$) terms at time $t$. That is $$y_t = l_t + s_t + r_t + \epsilon_t, ~ t = 1,\cdots, T,$$where - $\epsilon_t$s comprise a stationary random error process.- $r_t$ is the regression component which can be further expressed as $\sum_{i=1}^{I} {x_{i,t}\beta_{i, t}}$ with covariate $x$ and coefficient $\beta$ on indexes $i,t$For details of how on $l_t$ and $s_t$, please refer to **Part I**. Recall in **KTR**, we express coefficients as$$B=K b^T$$where- *coefficient matrix* $\text{B}$ has size $t \times P$ with rows equal to the $\beta_t$ - *knot matrix* $b$ with size $P\times J$; each entry is a latent variable $b_{p, j}$. The $b_j$ can be viewed as the "knots" from the perspective of spline regression and $j$ is a time index such that $t_j \in [1, \cdots, T]$.- *kernel matrix* $K$ with size $T\times J$ where the $i$th row and $j$th element can be viewed as the normalized weight $k(t_j, t) / \sum_{j=1}^{J} k(t_j, t)$In regression, we generate the matrix $K$ with Gaussian kernel $k_\text{reg}$ as such:$k_\text{reg}(t, t_j;\rho) = \exp ( -\frac{(t-t_j)^2}{2\rho^2} ),$where $\rho$ is the scale hyper-parameter. Data Simulation ModuleIn this example, we will use simulated data in order to have true regression coefficients for comparison. We propose two set of simulation data with three predictors each:The two data sets are:- random walk- sine-cosine like Note the data are random so it may be worthwhile to repeat the next few sets a few times to see how different data sets work. Random Walk Simulated Dataset
###Code
def sim_data_seasonal(n, RS):
""" coefficients curve are sine-cosine like
"""
np.random.seed(RS)
# make the time-varying coefficients
tau = np.arange(1, n+1)/n
data = pd.DataFrame({
'tau': tau,
'date': pd.date_range(start='1/1/2018', periods=n),
'beta1': 2 * tau,
'beta2': 1.01 + np.sin(2*pi*tau),
'beta3': 1.01 + np.sin(4*pi*(tau-1/8)),
'x1': np.random.normal(0, 10, size=n),
'x2': np.random.normal(0, 10, size=n),
'x3': np.random.normal(0, 10, size=n),
'trend': np.cumsum(np.concatenate((np.array([1]), np.random.normal(0, 0.1, n-1)))),
'error': np.random.normal(0, 1, size=n) #stats.t.rvs(30, size=n),#
})
data['y'] = data.x1 * data.beta1 + data.x2 * data.beta2 + data.x3 * data.beta3 + data.error
return data
def sim_data_rw(n, RS, p=3):
""" coefficients curve are random walk like
"""
np.random.seed(RS)
# initializing coefficients at zeros, simulate all coefficient values
lev = np.cumsum(np.concatenate((np.array([5.0]), np.random.normal(0, 0.01, n-1))))
beta = np.concatenate(
[np.random.uniform(0.05, 0.12, size=(1,p)),
np.random.normal(0.0, 0.01, size=(n-1,p))],
axis=0)
beta = np.cumsum(beta, 0)
# simulate regressors
covariates = np.random.normal(0, 10, (n, p))
# observation with noise
y = lev + (covariates * beta).sum(-1) + 0.3 * np.random.normal(0, 1, n)
regressor_col = ['x{}'.format(pp) for pp in range(1, p+1)]
data = pd.DataFrame(covariates, columns=regressor_col)
beta_col = ['beta{}'.format(pp) for pp in range(1, p+1)]
beta_data = pd.DataFrame(beta, columns=beta_col)
data = pd.concat([data, beta_data], axis=1)
data['y'] = y
data['date'] = pd.date_range(start='1/1/2018', periods=len(y))
return data
rw_data = sim_data_rw(n=300, RS=2021, p=3)
rw_data.head(10)
###Output
_____no_output_____
###Markdown
Sine-Cosine Like Simulated Dataset
###Code
sc_data = sim_data_seasonal(n=80, RS=2021)
sc_data.head(10)
###Output
_____no_output_____
###Markdown
Fitting a Model with Regressors The metadata for simulated data sets.
###Code
# num of predictors
p = 3
regressor_col = ['x{}'.format(pp) for pp in range(1, p + 1)]
response_col = 'y'
date_col='date'
###Output
_____no_output_____
###Markdown
As in **Part I** KTR follows sklearn model API style. First an instance of the Orbit class `KTR` is created. Second fit and predict methods are called for that instance. Besides providing meta data such `response_col`, `date_col` and `regressor_col`, there are additional args to provide to specify the estimator and the setting of the estimator. For details, please refer to other tutorials of the **Orbit** site.
###Code
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
###Output
_____no_output_____
###Markdown
Here `predict` has the additional argument `decompose=True`. This returns the components ($l_t$, $s_t$, and $r_t$) of the regression along with the prediction.
###Code
ktr.fit(df=rw_data)
ktr.predict(df=rw_data, decompose=True).head(5)
###Output
INFO:orbit:Optimizing(PyStan) with algorithm:LBFGS .
INFO:orbit:Using SVI(Pyro) with steps:301 , samples:100 , learning rate:0.1, learning_rate_total_decay:1.0 and particles:100 .
INFO:root:Guessed max_plate_nesting = 1
INFO:orbit:step 0 loss = 3107, scale = 0.091353
INFO:orbit:step 100 loss = 319.68, scale = 0.050692
INFO:orbit:step 200 loss = 305.49, scale = 0.05142
INFO:orbit:step 300 loss = 314.01, scale = 0.05279
###Markdown
Visualization of Regression Coefficient Curves The function `get_regression_coefs` extracts the coefficients (they will have central credibility intervals if the argument `include_ci=True` is used).
###Code
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
coef_mid.head(5)
###Output
_____no_output_____
###Markdown
Because this is simulated data it is possible to overlay the estimate with the true coefficients.
###Code
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, rw_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1,0.5));
###Output
_____no_output_____
###Markdown
To plot coefficients use the function `plot_regression_coefs` from the KTR class.
###Code
ktr.plot_regression_coefs(figsize=(10, 5), include_ci=True);
###Output
_____no_output_____
###Markdown
These types of time-varying coefficient detection problems are not new. Bayesian approaches include the R packages Bayesian Structural Time Series (a.k.a. **BSTS**) by Scott and Varian (2014) and **tvReg** by Isabel Casas and Ruben Fernandez-Casal (2021); frequentist approaches include Wu and Chiang (2000). For further studies on benchmarking coefficient detection, Ng, Wang and Dai (2021) provide a detailed comparison of **KTR** with other popular time-varying coefficient methods; **KTR** demonstrates superior performance in the random walk data simulation. Customizing Priors and Number of Knot Segments To demonstrate how to specify the number of knots and priors, consider the sine-cosine like simulated dataset. Fitting this dataset is trickier, since there may be better ways to define the number and position of the knots: there are obvious "change points" within the sine-cosine like curves. In **KTR** there are a few arguments that can be leveraged to assign a priori knot attributes: 1. `regressor_init_knot_loc` is used to define the prior mean of the knot values; e.g. in this case there is not a lot of prior knowledge, so zeros are used. 2. `regressor_init_knot_scale` and `regressor_knot_scale` are used to tune the prior sd of the global mean of the knots and the sd of each knot from the global mean, respectively. These create a plausible range for the knot values. 3. `regression_segments` defines the number of between-knot segments (the number of knots - 1). The higher the number of segments, the more change points are possible.
###Code
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
regressor_init_knot_loc=[0] * len(regressor_col),
regressor_init_knot_scale=[10.0] * len(regressor_col),
regressor_knot_scale=[2.0] * len(regressor_col),
regression_segments=6,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
ktr.fit(df=sc_data)
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, sc_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1, 0.5));
###Output
_____no_output_____
###Markdown
Visualize the knots using the `plot_regression_coefs` function with `with_knot=True`.
###Code
ktr.plot_regression_coefs(with_knot=True, figsize=(10, 5), include_ci=True);
###Output
_____no_output_____
###Markdown
Kernel-based Time-varying Regression - Part IIThe previous tutorial covered the basic syntax and structure of **KTR** (or so called **BTVC**); time-series data was fitted with a KTR model accounting for trend and seasonality. In this tutorial a KTR model is fit with trend, seasonality, and additional regressors. To summarize part 1, **KTR** considers a time-series as an additive combination of local-trend, seasonality, and additional regressors. The coefficients for all three components are allowed to vary over time. The time-varying of the coefficients is modeled using kernel smoothing of latent variables. This can also be an advantage of picking this model over other static regression coefficients models. This tutorial covers:1. KTR model structure with regression2. syntax to initialize, fit and predict a model with regressors3. visualization of regression coefficients
###Code
import pandas as pd
import numpy as np
from math import pi
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTR
from orbit.diagnostics.plot import plot_predicted_components
from orbit.utils.plot import get_orbit_style
from orbit.constants.palette import OrbitPalette
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.5f' % x)
orbit_style = get_orbit_style()
plt.style.use(orbit_style);
print(orbit.__version__)
###Output
1.1.0dev
###Markdown
Model Structure This section gives the mathematical structure of the KTR model. In short, it considers a time-series ($y_t$) as the linear combination of three parts. These are the local-trend ($l_t$), seasonality (s_t), and regression ($r_t$) terms at time $t$. That is $$y_t = l_t + s_t + r_t + \epsilon_t, ~ t = 1,\cdots, T,$$where - $\epsilon_t$s comprise a stationary random error process.- $r_t$ is the regression component which can be further expressed as $\sum_{i=1}^{I} {x_{i,t}\beta_{i, t}}$ with covariate $x$ and coefficient $\beta$ on indexes $i,t$For details of how on $l_t$ and $s_t$, please refer to **Part I**. Recall in **KTR**, we express coefficients as$$B=K b^T$$where- *coefficient matrix* $\text{B}$ has size $t \times P$ with rows equal to the $\beta_t$ - *knot matrix* $b$ with size $P\times J$; each entry is a latent variable $b_{p, j}$. The $b_j$ can be viewed as the "knots" from the perspective of spline regression and $j$ is a time index such that $t_j \in [1, \cdots, T]$.- *kernel matrix* $K$ with size $T\times J$ where the $i$th row and $j$th element can be viewed as the normalized weight $k(t_j, t) / \sum_{j=1}^{J} k(t_j, t)$In regression, we generate the matrix $K$ with Gaussian kernel $k_\text{reg}$ as such:$k_\text{reg}(t, t_j;\rho) = \exp ( -\frac{(t-t_j)^2}{2\rho^2} ),$where $\rho$ is the scale hyper-parameter. Data Simulation ModuleIn this example, we will use simulated data in order to have true regression coefficients for comparison. We propose two set of simulation data with three predictors each:The two data sets are:- random walk- sine-cosine like Note the data are random so it may be worthwhile to repeat the next few sets a few times to see how different data sets work. Random Walk Simulated Dataset
###Code
def sim_data_seasonal(n, RS):
""" coefficients curve are sine-cosine like
"""
np.random.seed(RS)
# make the time-varying coefficients
tau = np.arange(1, n+1)/n
data = pd.DataFrame({
'tau': tau,
'date': pd.date_range(start='1/1/2018', periods=n),
'beta1': 2 * tau,
'beta2': 1.01 + np.sin(2*pi*tau),
'beta3': 1.01 + np.sin(4*pi*(tau-1/8)),
'x1': np.random.normal(0, 10, size=n),
'x2': np.random.normal(0, 10, size=n),
'x3': np.random.normal(0, 10, size=n),
'trend': np.cumsum(np.concatenate((np.array([1]), np.random.normal(0, 0.1, n-1)))),
'error': np.random.normal(0, 1, size=n) #stats.t.rvs(30, size=n),#
})
data['y'] = data.x1 * data.beta1 + data.x2 * data.beta2 + data.x3 * data.beta3 + data.error
return data
def sim_data_rw(n, RS, p=3):
""" coefficients curve are random walk like
"""
np.random.seed(RS)
# initializing coefficients at zeros, simulate all coefficient values
lev = np.cumsum(np.concatenate((np.array([5.0]), np.random.normal(0, 0.01, n-1))))
beta = np.concatenate(
[np.random.uniform(0.05, 0.12, size=(1,p)),
np.random.normal(0.0, 0.01, size=(n-1,p))],
axis=0)
beta = np.cumsum(beta, 0)
# simulate regressors
covariates = np.random.normal(0, 10, (n, p))
# observation with noise
y = lev + (covariates * beta).sum(-1) + 0.3 * np.random.normal(0, 1, n)
regressor_col = ['x{}'.format(pp) for pp in range(1, p+1)]
data = pd.DataFrame(covariates, columns=regressor_col)
beta_col = ['beta{}'.format(pp) for pp in range(1, p+1)]
beta_data = pd.DataFrame(beta, columns=beta_col)
data = pd.concat([data, beta_data], axis=1)
data['y'] = y
data['date'] = pd.date_range(start='1/1/2018', periods=len(y))
return data
rw_data = sim_data_rw(n=300, RS=2021, p=3)
rw_data.head(10)
###Output
_____no_output_____
###Markdown
Sine-Cosine Like Simulated Dataset
###Code
sc_data = sim_data_seasonal(n=80, RS=2021)
sc_data.head(10)
###Output
_____no_output_____
###Markdown
Fitting a Model with Regressors The metadata for simulated data sets.
###Code
# num of predictors
p = 3
regressor_col = ['x{}'.format(pp) for pp in range(1, p + 1)]
response_col = 'y'
date_col='date'
###Output
_____no_output_____
###Markdown
As in **Part I** KTR follows sklearn model API style. First an instance of the Orbit class `KTR` is created. Second fit and predict methods are called for that instance. Besides providing meta data such `response_col`, `date_col` and `regressor_col`, there are additional args to provide to specify the estimator and the setting of the estimator. For details, please refer to other tutorials of the **Orbit** site.
###Code
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
###Output
_____no_output_____
###Markdown
Here `predict` has the additional argument `decompose=True`. This returns the components ($l_t$, $s_t$, and $r_t$) of the regression along with the prediction.
###Code
ktr.fit(df=rw_data)
ktr.predict(df=rw_data, decompose=True).head(5)
###Output
INFO:root:Guessed max_plate_nesting = 1
###Markdown
Visualization of Regression Coefficient Curves The function `get_regression_coefs` extracts the coefficients (they will have central credibility intervals if the argument `include_ci=True` is used).
###Code
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
coef_mid.head(5)
###Output
_____no_output_____
###Markdown
Because this is simulated data it is possible to overlay the estimate with the true coefficients.
###Code
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, rw_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1,0.5));
###Output
_____no_output_____
###Markdown
To plot coefficients use the function `plot_regression_coefs` from the KTR class.
###Code
ktr.plot_regression_coefs(figsize=(10, 5), include_ci=True);
###Output
_____no_output_____
###Markdown
These types of time-varying coefficient detection problems are not new. Bayesian approaches include the R packages Bayesian Structural Time Series (a.k.a. **BSTS**) by Scott and Varian (2014) and **tvReg** by Isabel Casas and Ruben Fernandez-Casal (2021); frequentist approaches include Wu and Chiang (2000). For further studies on benchmarking coefficient detection, Ng, Wang and Dai (2021) provide a detailed comparison of **KTR** with other popular time-varying coefficient methods; **KTR** demonstrates superior performance in the random walk data simulation. Customizing Priors and Number of Knot Segments To demonstrate how to specify the number of knots and priors, consider the sine-cosine like simulated dataset. Fitting this dataset is trickier, since there may be better ways to define the number and position of the knots: there are obvious "change points" within the sine-cosine like curves. In **KTR** there are a few arguments that can be leveraged to assign a priori knot attributes: 1. `regressor_init_knot_loc` is used to define the prior mean of the knot values; e.g. in this case there is not a lot of prior knowledge, so zeros are used. 2. `regressor_init_knot_scale` and `regressor_knot_scale` are used to tune the prior sd of the global mean of the knots and the sd of each knot from the global mean, respectively. These create a plausible range for the knot values. 3. `regression_segments` defines the number of between-knot segments (the number of knots - 1). The higher the number of segments, the more change points are possible.
###Code
ktr = KTR(
response_col=response_col,
date_col=date_col,
regressor_col=regressor_col,
regressor_init_knot_loc=[0] * len(regressor_col),
regressor_init_knot_scale=[10.0] * len(regressor_col),
regressor_knot_scale=[2.0] * len(regressor_col),
regression_segments=6,
prediction_percentiles=[2.5, 97.5],
seed=2021,
estimator='pyro-svi',
)
ktr.fit(df=sc_data)
coef_mid, coef_lower, coef_upper = ktr.get_regression_coefs(include_ci=True)
fig, axes = plt.subplots(p, 1, figsize=(12, 12), sharex=True)
x = np.arange(coef_mid.shape[0])
for idx in range(p):
axes[idx].plot(x, coef_mid['x{}'.format(idx + 1)], label='est' if idx == 0 else "", color=OrbitPalette.BLUE.value)
axes[idx].fill_between(x, coef_lower['x{}'.format(idx + 1)], coef_upper['x{}'.format(idx + 1)], alpha=0.2, color=OrbitPalette.BLUE.value)
axes[idx].scatter(x, sc_data['beta{}'.format(idx + 1)], label='truth' if idx == 0 else "", s=10, alpha=0.6, color=OrbitPalette.BLACK.value)
axes[idx].set_title('beta{}'.format(idx + 1))
fig.legend(bbox_to_anchor = (1, 0.5));
###Output
_____no_output_____
###Markdown
Visualize the knots using the `plot_regression_coefs` function with `with_knot=True`.
###Code
ktr.plot_regression_coefs(with_knot=True, figsize=(10, 5), include_ci=True);
###Output
_____no_output_____ |
jw/modeling_logistic.ipynb | ###Markdown
1. Outlier removal (attempted, but logistic regression is said to be robust to outliers) 2. Scaling 3. Normalization 4. Candidate features: SEX_ID, AGE, ITEM_COUNT, GENRE_NAME, PRICE_RATE, CATALOG_PRICE, DISCOUNT_PRICE, DISPERIOD, VALIDPERIOD, ken_name, LATITUDE, LONGITUDE, PAGE_SERIAL
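A minimal sketch of how the scaling step above might be combined with the categorical features in scikit-learn is shown below; the numeric/categorical column split and the pipeline itself are illustrative assumptions, not necessarily the preprocessing actually used in this notebook.
###Code
# Hedged sketch (assumption): scale numeric features and one-hot encode categorical
# features before a logistic regression. Column names follow the list above.
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression

numeric_cols = ["AGE", "ITEM_COUNT", "PRICE_RATE", "CATALOG_PRICE", "DISCOUNT_PRICE",
                "DISPERIOD", "VALIDPERIOD", "LATITUDE", "LONGITUDE", "PAGE_SERIAL"]
categorical_cols = ["SEX_ID", "GENRE_NAME", "ken_name"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

pipe = Pipeline([
    ("prep", preprocess),
    ("logit", LogisticRegression(max_iter=1000)),
])

# assuming `df` holds the training data with a PURCHASE_FLG label column:
# pipe.fit(df[numeric_cols + categorical_cols], df["PURCHASE_FLG"])
###Output
_____no_output_____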
###Code
import numpy as np
import statsmodels.api as sm

# statsmodels logistic regression via a patsy formula (categorical terms wrapped in C())
formula = 'PURCHASE_FLG ~ C(GENRE_NAME) + DISCOUNT_PRICE + C(ken_name)'
model = sm.Logit.from_formula(formula, df)
result = model.fit(disp=0)
# predict test data
# note: `clf`, `X_test`, `test_df` and `top_merge` are assumed to be defined in earlier cells
predict_proba = clf.predict_proba(X_test)
# if the class is 1 (True), the coupon is classified as purchased,
# so the line below finds the column index of the purchase-class scores
pos_idx = np.where(clf.classes_ == True)[0][0]
test_df["predict"] = predict_proba[:, pos_idx]
top10_coupon = test_df.groupby("USER_ID_hash").apply(top_merge)
top10_coupon.name = "PURCHASED_COUPONS"
top10_coupon.to_csv("submission.csv", header=True)
###Output
_____no_output_____ |
notebook/Slide_Deck.ipynb | ###Markdown
Ford GoBike System Data Visualization for February 2018 by Mohamed Elaraby Investigation Overview> In this presentation I will analyze the Ford GoBike System data from February 2018; the visualization focuses on the trip duration, start time (day, hour) and user type. Dataset Overview> The GoBike data covers bike trips that took place in February 2018. Each trip is anonymized and includes:- Trip Duration (seconds).- Start Time and Date.- End Time and Date.- Start Station ID.- Start Station Name.- Start Station Latitude.- Start Station Longitude.- End Station ID.- End Station Name.- End Station Latitude.- End Station Longitude.- Bike ID.- User Type (Subscriber or Customer – “Subscriber” = Member or “Customer” = Casual).data source: https://www.lyft.com/bikes/bay-wheels/system-data
###Code
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import calendar
import matplotlib.gridspec as gridspec
%matplotlib inline
# suppress warnings from final output
import warnings
warnings.simplefilter("ignore")
# load in the dataset into a pandas dataframe
df= pd.read_csv('Ford GoBike.csv')
df.head()
df[['start_time','end_time']]= df[['start_time','end_time']].apply(pd.to_datetime, format='%Y-%m-%d %H:%M:%S.%f')
df['user_type']= df['user_type'].astype('category')
# we will extract month, dayofweek, hour from the start_time
df['start_month']= df.start_time.dt.strftime('%b')
df['start_day_week']= df.start_time.dt.strftime('%a')
df['start_hour']= df.start_time.dt.strftime('%H')
df.head()
###Output
_____no_output_____
###Markdown
Distribution of duration using a histogram - on the log scale it appears roughly normally distributed:- the range spans from about 60 seconds to more than 20K seconds.- the most common values fall roughly between 200 and 2K seconds.
###Code
plt.figure(figsize=[8,6])
bins= 10** np.arange(np.log10(df.duration_sec.min()), np.log10(df.duration_sec.max())+0.02, 0.02)
plt.hist(data=df, x='duration_sec',bins=bins)
plt.xscale('log')
plt.xticks([50,100,200,300,500, 1e3, 2e3, 5e3, 1e4, 2e4], [50,100,200,300,500, '1k', '2k', '5k', '10k', '20k'])
plt.title('The distribution of log duration (seconds)')
plt.xlabel('Log duration (Seconds)')
plt.ylabel('Frequency');
###Output
_____no_output_____
###Markdown
Month/Day/Hour Biker Trends - all data was recorded in February.- Tuesday, Thursday and Wednesday are almost equal and have the most bikers in the week.- 8 AM and 5 PM are the busiest biking hours of the day.
###Code
fig, ax = plt.subplots(nrows=3, figsize = [10,8], constrained_layout=True)
default_color = sb.color_palette()[0]
sb.countplot(data = df, x = 'start_month', color = default_color, order=df.start_month.value_counts().index, ax = ax[0])
sb.countplot(data = df, x = 'start_day_week', color = default_color, order=df.start_day_week.value_counts().index, ax = ax[1])
sb.countplot(data = df, x = 'start_hour', color = default_color, order=df.start_hour.value_counts().index, ax = ax[2])
ax[0].set_title('Biking Trips (Month)')
ax[1].set_title('Biking Trips (Day of week)')
ax[2].set_title('Biking Trips (Hour of day)');
###Output
_____no_output_____
###Markdown
User Types vs. Duration- the median duration of the customer type is higher than the median duration of the subscriber type.
###Code
plt.figure(figsize=[8,6])
base_color= sb.color_palette()[0]
sb.boxplot(data=df, x='user_type', y='duration_sec', color=base_color)
plt.yscale('log')
plt.yticks([50,100,200,500, 1e3, 2e3, 5e3, 1e4, 2e4,4e4,8e4], [50,100,200,500, '1k', '2k', '5k', '10k', '20k','40k','80k'])
plt.title('Duration comparison of user type')
plt.ylabel('Log duration (Seconds)');
###Output
_____no_output_____
###Markdown
Duration by weekday and hour - Friday and Thursday have the longest durations compared to the other days of the week.- 2 AM, 3 AM and 4 AM have the longest durations compared to the other hours of the day.
###Code
plt.figure(figsize=[14,6])
sb.pointplot(data = df, x = 'start_hour', y = 'duration_sec', hue = 'start_day_week',
dodge = 0.5, linestyles = "",palette='Blues')
plt.yscale('log')
plt.yticks([50,100,200,500, 1e3, 2e3, 5e3, 1e4, 2e4,4e4,8e4], [50,100,200,500, '1k', '2k', '5k', '10k', '20k','40k','80k'])
plt.title('biking duration across weekdays and dayhours')
plt.ylabel('Average Log Duration (Seconds)');
###Output
_____no_output_____
###Markdown
User type duration across weekdays- on every day of the week the median duration of the customer type is higher than the median duration of the subscriber type.
###Code
g = sb.FacetGrid(data = df, col = 'start_day_week',col_wrap=2, height = 4)
g.map(sb.boxplot, 'user_type', 'duration_sec',order=df.user_type.value_counts().index)
plt.yscale('log')
plt.yticks([50,100,200,500, 1e3, 2e3, 5e3, 1e4, 2e4,4e4,8e4], [50,100,200,500, '1k', '2k', '5k', '10k', '20k','40k','80k']);
###Output
_____no_output_____
###Markdown
User type duration across hours of the day- for the customer type, biking durations at 3 AM and 4 AM are longer than at any other hour.- for the subscriber type, biking durations at 2 AM and 3 AM are longer than at any other hour.
###Code
plt.figure(figsize=[10,6])
ax = sb.barplot(data = df, x = 'start_hour', y = 'duration_sec', hue = 'user_type')
ax.legend(loc = 1, framealpha = 1, title = 'user_type')
plt.yscale('log')
plt.yticks([50,100,200,500, 1e3, 2e3, 5e3, 1e4, 2e4,4e4,8e4], [50,100,200,500, '1k', '2k', '5k', '10k', '20k','40k','80k'])
plt.title('different average duration of user types across the day hours')
plt.ylabel('Average Log duration (Seconds)');
###Output
_____no_output_____ |
RKI_COVID_Data_Comparison.ipynb | ###Markdown
Compare and Analyze Data on COVID-19 Infections Provided by the [Robert Koch Institute (RKI)](https://www.rki.de/EN/Home/homepage_node.html) The [Robert Koch Institute (RKI)](https://www.rki.de/EN/Home/homepage_node.html) is the federal government agency responsible for disease control and prevention in Germany. It publishes data about COVID-19 for all of Germany and uses various channels for that (see [this](https://www.rki.de/EN/Content/infections/epidemiology/outbreaks/COVID-19/COVID19.html) page for an overview).This notebook has been created for analyzing and comparing data from two different sources that are updated by the RKI daily, but have a different level of detail. The main objective is to understand how the very fine-grained numbers provided via GitHub can be aggregated such that they match what is shown in the [RKI's COVID-19 dashboard](https://corona.rki.de/). Preliminaries
###Code
import json
import humanize
import datetime as dt
import numpy as np
import pandas as pd
import plotly.express as px
import local_constants as LC
from math import sqrt
from urllib.request import urlopen
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Load and Analyze Data From the [NPGEO Corona Hub 2020](https://npgeo-corona-npgeo-de.hub.arcgis.com/) For all German districts, up-to-date COVID-19 data is available via [this](https://opendata.arcgis.com/datasets/917fc37a709542548cc3be077a786c17_0) page. This data appears to be the basis for the [COVID-19 dashboard](https://corona.rki.de/). Read Data
###Code
# load main data, but restrict the created dataframe to the most relevant columns
RKI_ARCGIS_COVID_BY_DISTRICT = \
pd.read_csv(LC.RKI_ARCGIS_URL, usecols=list(LC.RKI_ARCGIS_COLUMN_NAME_MAPPER.keys()))\
.rename(columns=LC.RKI_ARCGIS_COLUMN_NAME_MAPPER)\
.sort_values(by="district ID")\
.set_index("district ID")
# fill missing values
RKI_ARCGIS_COVID_BY_DISTRICT.fillna(method="pad", axis="index", inplace=True)
# convert to datetime64[ns]
RKI_ARCGIS_COVID_BY_DISTRICT["update time"] = pd.to_datetime(RKI_ARCGIS_COVID_BY_DISTRICT["update time"], format="%d.%m.%Y, %H:%M Uhr")
# add column for the number of deaths within the last seven days adjusted to a population size of 100.000 people
RKI_ARCGIS_COVID_BY_DISTRICT["deaths last 7 days per 100k"] = \
10**5 * RKI_ARCGIS_COVID_BY_DISTRICT["deaths last 7 days"] \
/ RKI_ARCGIS_COVID_BY_DISTRICT["population"]
# print date of most recent entry in the data
print(RKI_ARCGIS_COVID_BY_DISTRICT["update time"].max().strftime("last update is from %Y-%m-%d"))
###Output
last update is from 2022-04-18
###Markdown
Load and Analyze RKI's Data on COVID-19 From [GitHub](https://github.com/robert-koch-institut/SARS-CoV-2_Infektionen_in_Deutschland) Repository ["SARS-CoV-2 Infektionen in Deutschland"](https://github.com/robert-koch-institut/SARS-CoV-2_Infektionen_in_Deutschland) (SARS-CoV-2 Infections in Germany) contains up-to-date numbers of COVID-19 cases in Germany. The data appears to be what is reported by the districts to the RKI as it lists new cases based on the reporting date, beginning of the disease (reference date), age group, sex and district.In [Readme.md](https://github.com/robert-koch-institut/SARS-CoV-2_Infektionen_in_Deutschland/blob/master/Readme.md), an explanation for the data is provided (in German). Read Metadata The RKI publishes metadata based on [zenodo's](https://about.zenodo.org/) JSON format. Here, it is used to detect the publication date of the data. Typically, this is shortly after 3 AM of the current day (local time in Germany).
###Code
RKI_GITHUB_METADATA = json.loads(urlopen(LC.RKI_GITHUB_RAW_DATA_BASE_URL + LC.RKI_GITHUB_ZENODO_REL_URL).read())
RKI_GITHUB_PUBLICATION_DATE = pd.to_datetime(RKI_GITHUB_METADATA["publication_date"])
print(f"publication date is {RKI_GITHUB_PUBLICATION_DATE:%Y-%m-%d %H:%M:%S%z}")
###Output
publication date is 2022-04-18 03:01:57+0200
###Markdown
Read Data Load data describing COVID-19 infections and deaths etc.
###Code
RKI_GITHUB_COVID_INFECTIONS = pd.read_csv(LC.RKI_GITHUB_RAW_DATA_BASE_URL + LC.RKI_GITHUB_COVID_INFECTIONS_REL_URL,
converters=LC.RKI_GITHUB_VALUE_CONVERTERS)\
.rename(columns=LC.RKI_GITHUB_COLUMN_NAME_MAPPER)\
.astype(LC.RKI_GITHUB_COLUMN_TYPES_MAPPER)
###Output
_____no_output_____
###Markdown
Trying to Understand [Readme.md](https://github.com/robert-koch-institut/SARS-CoV-2_Infektionen_in_Deutschland/blob/master/Readme.md) The text in `Readme.md` states that negative values in column `cases` (`AnzahlFall` in the original data) are for correcting positive cases that have been falsely reported in the past. The same is described for columns `deaths` (`AnzahlTodesfall` in the original data) and `recovered` (`AnzahlGenesen` in the original data). This can be interpreted in a way that the correct number of total cases can be computed by simply adding all values of column `cases`, such that negative values "delete" the cases reported erroneously before.
###Code
print("total cases for Germany considering also negative values:")
print(f' GitHub: {RKI_GITHUB_COVID_INFECTIONS["cases"].sum().sum():,d}')
# sum of all cases in the data from ARCGIS
print(f' ARCGIS: {RKI_ARCGIS_COVID_BY_DISTRICT["cases"].sum():,d}')
###Output
total cases for Germany considering also negative values:
GitHub: 23,436,950
ARCGIS: 23,437,145
###Markdown
... but this number is **too small**. The number that is shown on the dashboard can be retrieved from the data by adding _all positive values_ in column `cases`.
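As a tiny made-up illustration of the difference between the two aggregations (the numbers below are invented, not taken from the RKI data): if a district reports +5 cases, then a correction of -2, then +3 more, the net sum is 6 while the positive-only sum is 8.
###Code
# toy illustration with invented numbers: net sum vs. positive-only sum
import pandas as pd

toy = pd.Series([5, -2, 3], name="cases")
print("net sum (corrections subtracted):", toy.sum())              # 6
print("positive-only sum (dashboard-style):", toy[toy > 0].sum())  # 8
###Output
_____no_output_____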
###Code
print("total cases for Germany:")
# print the sum for all positive values in column "cases"
print(f' GitHub: {RKI_GITHUB_COVID_INFECTIONS.loc[RKI_GITHUB_COVID_INFECTIONS["cases"] > 0, "cases"].sum():,d}')
# sum of all cases in the data from ARCGIS
print(f' ARCGIS: {RKI_ARCGIS_COVID_BY_DISTRICT["cases"].sum():,d}')
###Output
total cases for Germany:
GitHub: 23,437,145
ARCGIS: 23,437,145
###Markdown
Compute Totals Based on the interpretation of the data described above, the sum of columns `cases`, `deaths` and `recovered` is calculated for each district.For the sake of convenience, a new dataframe is defined, in which negative values for the number of COVID-19 cases, deaths and recovered patients are set to zero. This is used later for the aggregation of data.
###Code
# define a dataframe without the negative values in RKI_GITHUB_COVID_INFECTIONS
RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES = \
pd.concat([RKI_GITHUB_COVID_INFECTIONS[["district ID", "age group", "sex", "reporting date", "reference date"]],
RKI_GITHUB_COVID_INFECTIONS[["cases", "deaths", "recovered"]].apply(lambda a: np.maximum(a,0))],
axis="columns")
# sum up "cases", "deaths", "recovered" for each district
RKI_GITHUB_COVID_BY_DISTRICT_TOTALS = \
RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES[["district ID", "cases", "deaths", "recovered"]].groupby(by="district ID").sum()
# copy population size for each district from RKI_ARCGIS_COVID_BY_DISTRICT
RKI_GITHUB_COVID_BY_DISTRICT_TOTALS["population"] = RKI_ARCGIS_COVID_BY_DISTRICT["population"]
###Output
_____no_output_____
###Markdown
Compute Numbers per 100K People The sums in columns `cases`, `deaths` and `recovered` are normalized by the population size of the district. For the district's population size, the data from the [NPGEO Corona Hub 2020](https://npgeo-corona-npgeo-de.hub.arcgis.com/) is used.
###Code
# define a new dataframe by dividing the totals by the population size of each district and multiplying that with 100,000
RKI_GITHUB_COVID_BY_DISTRICT_PER_100K = \
pd.DataFrame(data=10**5 * RKI_GITHUB_COVID_BY_DISTRICT_TOTALS[["cases", "deaths", "recovered"]].values \
/ np.array(3 * [RKI_GITHUB_COVID_BY_DISTRICT_TOTALS["population"].values]).T,
index=RKI_GITHUB_COVID_BY_DISTRICT_TOTALS.index,
columns=["cases per 100k", "deaths per 100k", "recovered per 100k"])
###Output
_____no_output_____
###Markdown
Compute Totals for the Last Seven Days The values in columns `cases`, `deaths` and `recovered` of the last seven days are summed up.
###Code
# define the date of the earliest data that should be considered
number_of_days = 7
cut_off_date = np.datetime64(RKI_GITHUB_PUBLICATION_DATE.date() - dt.timedelta(days=number_of_days))
# compute sums for data that has been reported on or after the cut_off_date
RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS = \
RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES\
.loc[RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES["reporting date"] >= cut_off_date]\
.groupby(by="district ID").sum()\
.rename(columns={"cases": "cases last 7 days", "deaths": "deaths last 7 days", "recovered": "recovered last 7 days"})
# ensure that there is data for each district by filling missing data with zeros
# define a dataframe containing the most recent data of each district
df = RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES[["district ID", "reporting date"]].sort_values(by=["district ID", "reporting date"])\
.groupby(["district ID"]).last()
missing_rows = df.loc[df["reporting date"] < cut_off_date].index.shape[0]
if missing_rows > 0:
# there are districts that have not reported data within the last 7 days
# create dataframe containing the zeroes filling the missing data
ddf = pd.DataFrame(data=np.zeros((missing_rows, RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS.shape[1]), dtype=np.int64),
index=df.loc[df["reporting date"] < cut_off_date].index,
columns=RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS.columns)
# append zeros
RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS = RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS.append(ddf)
###Output
_____no_output_____
###Markdown
Compute Numbers for the Last Seven Days per 100K People normalize the sums for the last seven days by the districts' population size
###Code
# define a new dataframe computing values by dividing the totals for the last seven days by the population size
# of each district and multiplying it with 100,000
RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_PER_100K = \
pd.DataFrame(data=10**5 * RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS.values / np.array(3 * \
[RKI_GITHUB_COVID_BY_DISTRICT_TOTALS.loc[RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS.index, "population"]]).T,
index=RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS.index,
columns=["cases last 7 days per 100k", "deaths last 7 days per 100k", "recovered last 7 days per 100k"])
###Output
_____no_output_____
###Markdown
Create a DataFrame Containing all Values Derived From the COVID-19 Data from GitHub
###Code
# combine all of the dataframes with data that has been derived from RKI_GITHUB_COVID_INFECTIONS into one dataframe,
# only columns "district name" and "state name" are taken from RKI_ARCGIS_COVID_BY_DISTRICT
RKI_GITHUB_COVID_BY_DISTRICT = \
pd.concat([RKI_ARCGIS_COVID_BY_DISTRICT[["district name", "state name"]],
RKI_GITHUB_COVID_BY_DISTRICT_TOTALS,
RKI_GITHUB_COVID_BY_DISTRICT_PER_100K,
RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_TOTALS,
RKI_GITHUB_COVID_BY_DISTRICT_LAST_7_DAYS_PER_100K], axis="columns")
###Output
_____no_output_____
###Markdown
Verify that Derived Data From the [COVID-19 Data from GitHub](https://github.com/robert-koch-institut/SARS-CoV-2_Infektionen_in_Deutschland) and the Data From [NPGEO Corona Hub 2020](https://npgeo-corona-npgeo-de.hub.arcgis.com/) are (More or Less) Identical
###Code
EPSILON = 10**-10 # threshold for treating float values as zero
common_columns = list(set(RKI_ARCGIS_COVID_BY_DISTRICT.columns) & set(RKI_GITHUB_COVID_BY_DISTRICT.columns))
common_numerical_columns = [c for c in common_columns if RKI_GITHUB_COVID_BY_DISTRICT[c].dtypes != object]
# return True if all absolute differences in the common numerical columns are smaller than the threshold (EPSILON)
np.amax(np.abs(RKI_ARCGIS_COVID_BY_DISTRICT[common_numerical_columns].values \
- RKI_GITHUB_COVID_BY_DISTRICT[common_numerical_columns].values)) < EPSILON
###Output
_____no_output_____
###Markdown
Show Some Data Based on the result of the verification above, it seems that the interpretation of the data that is provided via GitHub is correct ... or at least identical with what is shown in the COVID-19 Dashboard. Hence, the following list of the most affected districts in Germany can be assumed to be correct.
###Code
n = 30
RKI_GITHUB_COVID_BY_DISTRICT.sort_values(by="cases last 7 days per 100k", ascending=False)\
.head(n).style.hide(axis="index").format(LC.FORMAT_MAPPER)
###Output
_____no_output_____
###Markdown
Show sums by state
###Code
base_columns = ["cases", "deaths", "recovered"]
last_7_days_columns = [c + " last 7 days" for c in base_columns]
index_column = "state name"
columns = [index_column, "population"] + base_columns + last_7_days_columns
RKI_GITHUB_COVID_BY_STATE = RKI_GITHUB_COVID_BY_DISTRICT[columns].groupby(by=index_column).sum()
per_100k_columns = [c + " per 100k" for c in base_columns]
last_7_days_per_100k_columns = [c + " per 100k" for c in last_7_days_columns]
RKI_GITHUB_COVID_BY_STATE[per_100k_columns + last_7_days_per_100k_columns] = \
10**5 * RKI_GITHUB_COVID_BY_STATE[base_columns + last_7_days_columns].values\
/ np.array(6 * [RKI_GITHUB_COVID_BY_STATE["population"]]).T
RKI_GITHUB_COVID_BY_STATE.style.format(LC.FORMAT_MAPPER)
###Output
_____no_output_____
###Markdown
equally, the following totals for all of Germany appear correct
###Code
columns = ["population", "cases", "cases last 7 days", "deaths", "deaths last 7 days", "recovered", "recovered last 7 days"]
RKI_GITHUB_COVID_BY_DISTRICT[columns].sum().to_frame().T.style.hide(axis="index").format(LC.FORMAT_MAPPER)
###Output
_____no_output_____
###Markdown
likewise, the sums for the last 7 days per 100,000 people can be computed
###Code
columns = ["cases last 7 days", "deaths last 7 days", "recovered last 7 days"]
(10**5 * RKI_GITHUB_COVID_BY_DISTRICT[columns].sum() / RKI_GITHUB_COVID_BY_DISTRICT["population"].sum())\
.to_frame().T.rename(columns={c:c+" per 100k" for c in columns}).style.hide(axis="index").format(LC.FORMAT_MAPPER)
###Output
_____no_output_____
###Markdown
For the increase in the numbers of cases, deaths and recovered people within the last day, the data for the previous day is loaded and subtracted from the totals of the current day.
###Code
PREVIOUS_DAY = pd.Timestamp(RKI_GITHUB_PUBLICATION_DATE.date() - dt.timedelta(days=1))
RKI_GITHUB_COVID_INFECTIONS_PREVIOUS_DAY \
= pd.read_csv(LC.RKI_GITHUB_RAW_DATA_BASE_URL + "/Archiv" + \
PREVIOUS_DAY.strftime("/%Y-%m-%d_Deutschland_SarsCov2_Infektionen.csv"),
converters=LC.RKI_GITHUB_VALUE_CONVERTERS)\
.rename(columns=LC.RKI_GITHUB_COLUMN_NAME_MAPPER)\
.astype(LC.RKI_GITHUB_COLUMN_TYPES_MAPPER)
RKI_GITHUB_COVID_INFECTIONS_PREVIOUS_DAY_WITHOUT_NEGATIVES = \
pd.concat([RKI_GITHUB_COVID_INFECTIONS_PREVIOUS_DAY[["district ID", "age group", "sex", "reporting date", "reference date"]],
RKI_GITHUB_COVID_INFECTIONS_PREVIOUS_DAY[["cases", "deaths", "recovered"]].apply(lambda a: np.maximum(a,0))],
axis="columns")
(RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES[["cases", "deaths", "recovered"]].sum()
- RKI_GITHUB_COVID_INFECTIONS_PREVIOUS_DAY_WITHOUT_NEGATIVES[["cases", "deaths", "recovered"]].sum())\
.to_frame().T.style.hide(axis="index").format(LC.FORMAT_MAPPER)
###Output
_____no_output_____
###Markdown
Compute Totals by Age Group and Sex In order to close this notebook with something that is not mysterious, the total number of cases, deaths and recovered people is computed by age group and sex. This is again in sync with the data of the COVID-19 dashboard.
###Code
RKI_GITHUB_COVID_BY_AGE_GROUP_AND_SEX_TOTALS = \
RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES[["age group", "sex", "cases", "deaths", "recovered"]]\
.groupby(by=["age group", "sex"]).sum()
RKI_GITHUB_COVID_BY_AGE_GROUP_AND_SEX_TOTALS.style.format(LC.FORMAT_MAPPER)
###Output
_____no_output_____
###Markdown
Compare RKI Data with WHO Data The World Health Organization (WHO) publishes data on COVID-19 for a large number of countries (see [covid19.who.int](https://covid19.who.int/)), including Germany. Apparently, the WHO's numbers for Germany differ from those published by the RKI. The following analysis quantifies those differences.
###Code
# load data from WHO
WHO_COVID_WORLDWIDE = pd.read_csv("https://covid19.who.int/WHO-COVID-19-global-data.csv", parse_dates=["Date_reported"])\
.rename(columns=LC.WHO_COLUMN_NAME_MAPPER).set_index("reporting date")
last_update = WHO_COVID_WORLDWIDE.loc[WHO_COVID_WORLDWIDE["country"] == "Germany"].index.max()
print(f"most recent data for Germany is from {last_update.strftime('%Y-%m-%d')}")
# verify that the cumulative sums for Germany are correct
assert (WHO_COVID_WORLDWIDE[WHO_COVID_WORLDWIDE["country"] == "Germany"][["cases", "deaths"]]\
.cumsum().rename(columns={"cases":"cumulative cases", "deaths":"cumulative deaths"}) == \
WHO_COVID_WORLDWIDE[WHO_COVID_WORLDWIDE["country"] == "Germany"][["cumulative cases", "cumulative deaths"]]).all().all()
# define a start and end date that covers all the data available from RKI and WHO
START_DATE = min(RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES["reporting date"].min(),
WHO_COVID_WORLDWIDE[WHO_COVID_WORLDWIDE["country"] == "Germany"].index.min())
END_DATE = max(RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES["reporting date"].max(),
WHO_COVID_WORLDWIDE[WHO_COVID_WORLDWIDE["country"] == "Germany"].index.max())
DATE_RANGE = pd.date_range(start=START_DATE, end=END_DATE)
# define DataFrames with identical structure for the data from RKI and WHO on COVID-19 cases and deaths in Germany
RKI_COVID_GERMANY = RKI_GITHUB_COVID_INFECTIONS_WITHOUT_NEGATIVES[["reporting date", "cases", "deaths"]]\
.groupby(by="reporting date").sum().reindex(index=DATE_RANGE)
WHO_COVID_GERMANY = WHO_COVID_WORLDWIDE[WHO_COVID_WORLDWIDE["country"] == "Germany"][["cases", "deaths"]]\
.reindex(index=DATE_RANGE)
###Output
_____no_output_____
###Markdown
Due to possible delays in the transmission and processing of data, it is likely that the time axis (index `reporting date`) is not aligned for the two sets of data (i.e. RKI and WHO). Hence an optimal time shift of the WHO's data is computed based on the Euclidean distance of the two vectors.
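Concretely (in my notation, which is not used elsewhere in this notebook): for a candidate shift of $s$ days, the code below evaluates $d(s) = \sqrt{\sum_t \big[(c^{RKI}_t - \tilde{c}^{WHO}_t(s))^2 + (d^{RKI}_t - \tilde{d}^{WHO}_t(s))^2\big]}$, where $c_t$ and $d_t$ are the daily case and death counts and $\tilde{c}^{WHO}_t(s)$, $\tilde{d}^{WHO}_t(s)$ denote the WHO series shifted by $s$ days. Missing values are ignored via `np.nansum`, and the shift with the smallest $d(s)$ over $s \in \{-7, \dots, 7\}$ is selected.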
###Code
# determine best shift (as multiplies of one day) for the WHO-data, so that the euclidean distance for cases
# and deaths combined is minimal
IMAGE_WIDTH, IMAGE_HEIGHT = 1500, 700
window, best_shift, min_distance, x, y = 7, None, float("Inf"), [], []
for shift in range(-window,window+1):
distance = sqrt(np.nansum(np.square((RKI_COVID_GERMANY \
- WHO_COVID_GERMANY.shift(periods=shift, freq=dt.timedelta(days=1))).values)))
x.append(shift)
y.append(distance)
if distance < min_distance:
best_shift, min_distance = shift, distance
fig = px.line(pd.DataFrame(data = {"x": x, "y":y}),
x="x", y="y",
title = f"Best Shift in Days is {best_shift:+3d} With Minimal Euclidian Distance of {min_distance:11,.2f}",
labels = {"y": "euclidean distance", "x": "shift in days"},
markers = True)
img_bytes = fig.to_image(format = "png", width=IMAGE_WIDTH, height=IMAGE_HEIGHT)
display(Image(img_bytes))
WHO_SHIFT = dt.timedelta(days=best_shift)
WHO_COVID_GERMANY_SHIFTED = WHO_COVID_GERMANY.shift(periods=best_shift, freq=dt.timedelta(days=1)).reindex(index=DATE_RANGE)
###Output
_____no_output_____
###Markdown
After aligning the data as well as possible, there should be no difference in the total numbers of COVID-19 cases and deaths - unfortunately, this is not the case ...
###Code
print("Difference in Totals Between Data From RKI and WHO for the Timeframe From "\
      + f"{humanize.naturaldate(START_DATE)} Until {humanize.naturaldate(END_DATE + WHO_SHIFT)}")
df = pd.concat([pd.DataFrame(data=RKI_COVID_GERMANY.loc[START_DATE:END_DATE + WHO_SHIFT].sum().to_dict(), index=["RKI"]),
pd.DataFrame(data=WHO_COVID_GERMANY_SHIFTED.loc[START_DATE:END_DATE + WHO_SHIFT].sum().to_dict(), index=["WHO"]),],
axis="index")
df = pd.concat([df, pd.DataFrame((df.iloc[0] - df.iloc[1]).to_dict(), index=["difference"])], axis="index")
df.style.format("{:,.0f}")
###Output
Difference in Totals Between Data From RKI and WHO for the Timeframe FromJan 02 2020 Until Apr 16
###Markdown
maybe there is some obvious flaw, which can be seen in the diagram showing the differences on a daily basis ...
###Code
fig = px.line((RKI_COVID_GERMANY
- WHO_COVID_GERMANY_SHIFTED).reset_index(), x="index", y=["cases", "deaths"],
title="Differences in Daily Numbers Provided by RKI and WHO for COVID Cases and Deaths in Germany",
labels={"value": "difference", "index": "time", "variable": ""})
img_bytes = fig.to_image(format="png", width=IMAGE_WIDTH, height=IMAGE_HEIGHT)
Image(img_bytes)
###Output
_____no_output_____
###Markdown
compare "smoothed" data
###Code
window = '14d'
fig = px.line((RKI_COVID_GERMANY.rolling(window, center=True, min_periods=1).mean()
- WHO_COVID_GERMANY_SHIFTED.rolling(window, center=True, min_periods=1).mean()).reset_index(),
x="index", y=["cases", "deaths"],
title=f"Differences in Moving Averages ({window}-Window) of Daily Number "+
"Provided by RKI and WHO for COVID Cases and Deaths in Germany",
labels={"value": "difference", "index": "time", "variable": ""})
img_bytes = fig.to_image(format="png", width=IMAGE_WIDTH, height=IMAGE_HEIGHT)
Image(img_bytes)
###Output
_____no_output_____
###Markdown
show differences in the data sets based on the cumulative sum of cases and deaths
###Code
column = "cases"
df = pd.concat([RKI_COVID_GERMANY[[column]].cumsum().rename(columns={column: column + " - RKI"}),\
WHO_COVID_GERMANY_SHIFTED[[column]].cumsum().rename(columns={column: column + " - WHO"})], axis="columns").reset_index()
fig = px.line(df, x="index", y=[column + " - RKI", column + " - WHO"],
title=f"Cumulative Sums in COVID {column.capitalize()} in Germany from RKI and WHO",
labels={"value": column, "index": "time", "variable": ""})
img_bytes = fig.to_image(format="png", width=IMAGE_WIDTH, height=IMAGE_HEIGHT)
Image(img_bytes)
column = "deaths"
df = pd.concat([RKI_COVID_GERMANY[[column]].cumsum().rename(columns={column: column + " - RKI"}),\
WHO_COVID_GERMANY_SHIFTED[[column]].cumsum().rename(columns={column: column + " - WHO"})], axis="columns").reset_index()
fig = px.line(df, x="index", y=[column + " - RKI", column + " - WHO"],
title=f"Cumulative Sums in COVID {column.capitalize()} in Germany from RKI and WHO",
labels={"value": column, "index": "time", "variable": ""})
img_bytes = fig.to_image(format="png", width=IMAGE_WIDTH, height=IMAGE_HEIGHT)
Image(img_bytes)
###Output
_____no_output_____ |
06. Support Vector Machines - Python.ipynb | ###Markdown
Exercise 6 - Support Vector Machines=====Support vector machines (SVMs) let us predict categories. This exercise will demonstrate a simple support vector machine that can predict a category from a small number of features. Our problem is that we want to be able to categorise which type of tree a new specimen belongs to. To do this, we will use features of three different types of trees to train an SVM. __Run the code__ in the cell below.
###Code
from google.colab import drive
drive.mount('/content/drive')
# Run this code!
# It sets up the graphing configuration.
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as graph
%matplotlib inline
graph.rcParams['figure.figsize'] = (15,5)
graph.rcParams["font.family"] = 'DejaVu Sans'
graph.rcParams["font.size"] = '12'
graph.rcParams['image.cmap'] = 'rainbow'
###Output
_____no_output_____
###Markdown
Step 1-----First, we will take a look at the raw data to see what features we have. Replace `<printDataHere>` with `print(dataset.head())` and then __run the code__.
###Code
import pandas as pd
import numpy as np
# Loads the SVM library
from sklearn import svm
# Loads the dataset
dataset = pd.read_csv('/content/drive/My Drive/ms learn/Data/trees.csv')
###
# REPLACE <printDataHere> with print(dataset.head()) TO PREVIEW THE DATASET
###
print(dataset.head())
###
###Output
leaf_width leaf_length trunk_girth trunk_height tree_type
0 5.13 6.18 8.26 8.74 0
1 7.49 4.02 8.07 6.78 0
2 9.22 4.16 5.46 8.45 1
3 6.98 11.10 6.96 4.06 2
4 3.46 5.19 8.72 10.40 0
###Markdown
It looks like we have _four features_ (leaf_width, leaf_length, trunk_girth, trunk_height) and _one label_ (tree_type).Let's plot it.__Run the code__ in the cell below.
###Code
# Run this code to plot the leaf features
# This extracts the features. drop() deletes the column we state (tree_type), leaving only the features
allFeatures = dataset.drop(['tree_type'], axis = 1)
# This keeps only the column we state (tree_type), leaving only our label
labels = np.array(dataset['tree_type'])
#Plots the graph
X = allFeatures['leaf_width']
Y = allFeatures['leaf_length']
color=labels
graph.scatter(X, Y, c = color)
graph.title('classification plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.legend()
graph.show()
###Output
No handles with labels found to put in legend.
###Markdown
__Run the code__ in the cell below to plot the trunk features
###Code
# Run this code to plot the trunk features
graph.scatter(allFeatures['trunk_girth'], allFeatures['trunk_height'], c = labels)
graph.title('Classification plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Step 2-----Let's make a support vector machine.The syntax for a support vector machine is as follows:__`model = svm.SVC().fit(features, labels)`__Your features set will be called __`train_X`__ and your labels set will be called __`train_Y`__. Let's first run the SVM in the cell below using the first two features, the leaf features (a short illustration on synthetic data is shown immediately below, before the exercise cell).
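The next cell is a minimal, self-contained illustration of the same `svm.SVC().fit(...)` pattern on synthetic data; this toy example is an editorial addition for clarity and does not use the tree dataset.
###Code
# toy illustration on synthetic data: fit an SVC and predict two new points
import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)
toy_X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
toy_Y = np.array([0] * 20 + [1] * 20)

toy_model = svm.SVC().fit(toy_X, toy_Y)
print(toy_model.predict([[0, 0], [3, 3]]))
###Output
_____no_output_____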
###Code
# Sets up the feature and target sets for leaf features
# Feature 1
feature_one = allFeatures['leaf_width'].values
# Feature 2
feature_two = allFeatures['leaf_length'].values
# Features
train_X = np.asarray([feature_one, feature_two]).transpose()
# Labels
train_Y = labels
# Fits the SVM model
###
# REPLACE THE <makeSVM> WITH THE CODE TO MAKE A SVM MODEL AS ABOVE
###
model = svm.SVC().fit(train_X, train_Y)
###
print("Model ready. Now plot it to see the result.")
###Output
_____no_output_____
###Markdown
Let's plot it! Run the cell below to visualise the SVM with our dataset.
###Code
# Run this to plots the SVM model
X_min, X_max = train_X[:, 0].min() - 1, train_X[:, 0].max() + 1
Y_min, Y_max = train_X[:, 1].min() - 1, train_X[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(feature_one, feature_two, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha=0.5)
graph.title('SVM plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.show()
###Output
_____no_output_____
###Markdown
The graph shows three colored zones that the SVM has chosen to group the datapoints in. Color, here, means type of tree. As we can see, the zones correspond reasonably well with the actual tree types of our training data. This means that, for its training data, the SVM can determine tree type quite well from the leaf features.Now let's do the same using trunk features. In the cell below replace: 1. `<addTrunkGirth>` with `'trunk_girth'` 2. `<addTrunkHeight>` with `'trunk_height'` Then __run the code__.
###Code
# Feature 1
###--- REPLACE THE <addTrunkGirth> BELOW WITH 'trunk_girth' (INCLUDING THE QUOTES) ---###
###
trunk_girth = allFeatures[<addTrunkGirth>].values
###
# Feature 2
###--- REPLACE THE <addTrunkHeight> BELOW WITH 'trunk_height' (INCLUDING THE QUOTES) ---###
trunk_height = allFeatures[<addTrunkHeight>].values
###
# Features
trunk_features = np.asarray([trunk_girth, trunk_height]).transpose()
# Fits the SVM model
model = svm.SVC().fit(trunk_features, train_Y)
# Plots the SVM model
X_min, X_max = trunk_features[:, 0].min() - 1, trunk_features[:, 0].max() + 1
Y_min, Y_max = trunk_features[:, 1].min() - 1, trunk_features[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(trunk_girth, trunk_height, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha = 0.5)
graph.title('SVM plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Exercise 6 - Support Vector Machines=====Support vector machines (SVMs) let us predict categories. This exercise will demonstrate a simple support vector machine that can predict a category from a small number of features. Our problem is that we want to be able to categorise which type of tree a new specimen belongs to. To do this, we will use features of three different types of trees to train an SVM. __Run the code__ in the cell below.
###Code
# Run this code!
# It sets up the graphing configuration.
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as graph
%matplotlib inline
graph.rcParams['figure.figsize'] = (15,5)
graph.rcParams["font.family"] = 'DejaVu Sans'
graph.rcParams["font.size"] = '12'
graph.rcParams['image.cmap'] = 'rainbow'
###Output
_____no_output_____
###Markdown
Step 1-----First, we will take a look at the raw data to see what features we have. Replace `<printDataHere>` with `print(dataset.head())` and then __run the code__.
###Code
import pandas as pd
import numpy as np
# Loads the SVM library
from sklearn import svm
# Loads the dataset
dataset = pd.read_csv('Data/trees.csv')
###
# REPLACE <printDataHere> with print(dataset.head()) TO PREVIEW THE DATASET
###
print(dataset.head())
###
###Output
leaf_width leaf_length trunk_girth trunk_height tree_type
0 5.13 6.18 8.26 8.74 0
1 7.49 4.02 8.07 6.78 0
2 9.22 4.16 5.46 8.45 1
3 6.98 11.10 6.96 4.06 2
4 3.46 5.19 8.72 10.40 0
###Markdown
It looks like we have _four features_ (leaf_width, leaf_length, trunk_girth, trunk_height) and _one label_ (tree_type).Let's plot it.__Run the code__ in the cell below.
###Code
# Run this code to plot the leaf features
# This extracts the features. drop() deletes the column we state (tree_type), leaving only the features
allFeatures = dataset.drop(['tree_type'], axis = 1)
# This keeps only the column we state (tree_type), leaving only our label
labels = np.array(dataset['tree_type'])
#Plots the graph
X = allFeatures['leaf_width']
Y = allFeatures['leaf_length']
color=labels
graph.scatter(X, Y, c = color)
graph.title('classification plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.legend()
graph.show()
###Output
No handles with labels found to put in legend.
###Markdown
__Run the code__ in the cell below to plot the trunk features
###Code
# Run this code to plot the trunk features
graph.scatter(allFeatures['trunk_girth'], allFeatures['trunk_height'], c = labels)
graph.title('Classification plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Step 2-----Let's make a support vector machine.The syntax for a support vector machine is as follows:__`model = svm.SVC().fit(features, labels)`__Your features set will be called __`train_X`__ and your labels set will be called __`train_Y`__. Let's first run the SVM in the cell below using the first two features, the leaf features.
###Code
# Sets up the feature and target sets for leaf features
# Feature 1
feature_one = allFeatures['leaf_width'].values
# Feature 2
feature_two = allFeatures['leaf_length'].values
# Features
train_X = np.asarray([feature_one, feature_two]).transpose()
# Labels
train_Y = labels
# Fits the SVM model
###
# REPLACE THE <makeSVM> WITH THE CODE TO MAKE A SVM MODEL AS ABOVE
###
model = svm.SVC().fit(train_X, train_Y)
###
print("Model ready. Now plot it to see the result.")
###Output
Model ready. Now plot it to see the result.
###Markdown
Let's plot it! Run the cell below to visualise the SVM with our dataset.
###Code
# Run this to plots the SVM model
X_min, X_max = train_X[:, 0].min() - 1, train_X[:, 0].max() + 1
Y_min, Y_max = train_X[:, 1].min() - 1, train_X[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(feature_one, feature_two, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha=0.5)
graph.title('SVM plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.show()
###Output
_____no_output_____
###Markdown
The graph shows three colored zones that the SVM has chosen to group the datapoints in. Color, here, means type of tree. As we can see, the zones correspond reasonably well with the actual tree types of our training data. This means that, for its training data, the SVM can determine tree type quite well from the leaf features.Now let's do the same using trunk features. In the cell below replace: 1. `<addTrunkGirth>` with `'trunk_girth'` 2. `<addTrunkHeight>` with `'trunk_height'` Then __run the code__.
###Code
# Feature 1
###--- REPLACE THE <addTrunkGirth> BELOW WITH 'trunk_girth' (INCLUDING THE QUOTES) ---###
###
trunk_girth = allFeatures['trunk_girth'].values
###
# Feature 2
###--- REPLACE THE <addTrunkHeight> BELOW WITH 'trunk_height' (INCLUDING THE QUOTES) ---###
trunk_height = allFeatures['trunk_height'].values
###
# Features
trunk_features = np.asarray([trunk_girth, trunk_height]).transpose()
# Fits the SVM model
model = svm.SVC().fit(trunk_features, train_Y)
# Plots the SVM model
X_min, X_max = trunk_features[:, 0].min() - 1, trunk_features[:, 0].max() + 1
Y_min, Y_max = trunk_features[:, 1].min() - 1, trunk_features[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(trunk_girth, trunk_height, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha = 0.5)
graph.title('SVM plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Exercise 6 - Support Vector Machines=====Support vector machines (SVMs) let us predict categories. This exercise will demonstrate a simple support vector machine that can predict a category from a small number of features. Our problem is that we want to be able to categorise which type of tree a new specimen belongs to. To do this, we will use features of three different types of trees to train an SVM. __Run the code__ in the cell below.
###Code
# Run this code!
# It sets up the graphing configuration.
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as graph
%matplotlib inline
graph.rcParams['figure.figsize'] = (15,5)
graph.rcParams["font.family"] = 'DejaVu Sans'
graph.rcParams["font.size"] = '12'
graph.rcParams['image.cmap'] = 'rainbow'
###Output
_____no_output_____
###Markdown
Step 1-----First, we will take a look at the raw data to see what features we have. Replace `<printDataHere>` with `print(dataset.head())` and then __run the code__.
###Code
import pandas as pd
import numpy as np
# Loads the SVM library
from sklearn import svm
# Loads the dataset
dataset = pd.read_csv('trees.csv')
###
# REPLACE <printDataHere> with print(dataset.head()) TO PREVIEW THE DATASET
###
print(dataset.head())
###
###Output
leaf_width leaf_length trunk_girth trunk_height tree_type
0 5.13 6.18 8.26 8.74 0
1 7.49 4.02 8.07 6.78 0
2 9.22 4.16 5.46 8.45 1
3 6.98 11.10 6.96 4.06 2
4 3.46 5.19 8.72 10.40 0
###Markdown
It looks like we have _four features_ (leaf_width, leaf_length, trunk_girth, trunk_height) and _one label_ (tree_type).Let's plot it.__Run the code__ in the cell below.
###Code
# Run this code to plot the leaf features
# This extracts the features. drop() deletes the column we state (tree_type), leaving only the features
allFeatures = dataset.drop(['tree_type'], axis = 1)
# This keeps only the column we state (tree_type), leaving only our label
labels = np.array(dataset['tree_type'])
#Plots the graph
X = allFeatures['leaf_width']
Y = allFeatures['leaf_length']
color=labels
graph.scatter(X, Y, c = color)
graph.title('classification plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.legend()
graph.show()
###Output
No handles with labels found to put in legend.
###Markdown
__Run the code__ in the cell below to plot the trunk features
###Code
# Run this code to plot the trunk features
graph.scatter(allFeatures['trunk_girth'], allFeatures['trunk_height'], c = labels)
graph.title('Classification plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Step 2-----Let's make a support vector machine.The syntax for a support vector machine is as follows:__`model = svm.SVC().fit(features, labels)`__Your features set will be called __`train_X`__ and your labels set will be called __`train_Y`__. Let's first run the SVM in the cell below using the first two features, the leaf features.
###Code
# Sets up the feature and target sets for leaf features
# Feature 1
feature_one = allFeatures['leaf_width'].values
# Feature 2
feature_two = allFeatures['leaf_length'].values
# Features
train_X = np.asarray([feature_one, feature_two]).transpose()
# Labels
train_Y = labels
# Fits the SVM model
###
# REPLACE THE <makeSVM> WITH THE CODE TO MAKE A SVM MODEL AS ABOVE
###
model = svm.SVC().fit(train_X, train_Y)
###
print("Model ready. Now plot it to see the result.")
###Output
Model ready. Now plot it to see the result.
###Markdown
Let's plot it! Run the cell below to visualise the SVM with our dataset.
###Code
# Run this to plots the SVM model
X_min, X_max = train_X[:, 0].min() - 1, train_X[:, 0].max() + 1
Y_min, Y_max = train_X[:, 1].min() - 1, train_X[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(feature_one, feature_two, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha=0.5)
graph.title('SVM plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.show()
###Output
_____no_output_____
###Markdown
The graph shows three colored zones that the SVM has chosen to group the datapoints in. Color, here, means type of tree. As we can see, the zones correspond reasonably well with the actual tree types of our training data. This means that, for its training data, the SVM can determine tree type quite well from the leaf features.Now let's do the same using trunk features. In the cell below replace: 1. `<addTrunkGirth>` with `'trunk_girth'` 2. `<addTrunkHeight>` with `'trunk_height'` Then __run the code__.
###Code
# Feature 1
###--- REPLACE THE <addTrunkGirth> BELOW WITH 'trunk_girth' (INCLUDING THE QUOTES) ---###
###
trunk_girth = allFeatures['trunk_girth'].values
###
# Feature 2
###--- REPLACE THE <addTrunkHeight> BELOW WITH 'trunk_height' (INCLUDING THE QUOTES) ---###
trunk_height = allFeatures['trunk_height'].values
###
# Features
trunk_features = np.asarray([trunk_girth, trunk_height]).transpose()
# Fits the SVM model
model = svm.SVC().fit(trunk_features, train_Y)
# Plots the SVM model
X_min, X_max = trunk_features[:, 0].min() - 1, trunk_features[:, 0].max() + 1
Y_min, Y_max = trunk_features[:, 1].min() - 1, trunk_features[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(trunk_girth, trunk_height, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha = 0.5)
graph.title('SVM plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Exercise 6 - Support Vector Machines=====Support vector machines (SVMs) let us predict categories. This exercise will demonstrate a simple support vector machine that can predict a category from a small number of features. Our problem is that we want to be able to categorise which type of tree a new specimen belongs to. To do this, we will use features of three different types of trees to train an SVM. __Run the code__ in the cell below.
###Code
# Run this code!
# It sets up the graphing configuration.
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as graph
%matplotlib inline
graph.rcParams['figure.figsize'] = (15,5)
graph.rcParams["font.family"] = 'DejaVu Sans'
graph.rcParams["font.size"] = '12'
graph.rcParams['image.cmap'] = 'rainbow'
###Output
_____no_output_____
###Markdown
Step 1-----First, we will take a look at the raw data to see what features we have. Replace `<printDataHere>` with `print(dataset.head())` and then __run the code__.
###Code
import pandas as pd
import numpy as np
# Loads the SVM library
from sklearn import svm
# Loads the dataset
dataset = pd.read_csv('Data/trees.csv')
###
# REPLACE <printDataHere> with print(dataset.head()) TO PREVIEW THE DATASET
###
<printDataHere>
###
###Output
_____no_output_____
###Markdown
It looks like we have _four features_ (leaf_width, leaf_length, trunk_girth, trunk_height) and _one label_ (tree_type).Let's plot it.__Run the code__ in the cell below.
###Code
# Run this code to plot the leaf features
# This extracts the features. drop() deletes the column we state (tree_type), leaving only the features
allFeatures = dataset.drop(['tree_type'], axis = 1)
# This keeps only the column we state (tree_type), leaving only our label
labels = np.array(dataset['tree_type'])
#Plots the graph
X = allFeatures['leaf_width']
Y = allFeatures['leaf_length']
color=labels
graph.scatter(X, Y, c = color)
graph.title('classification plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.legend()
graph.show()
###Output
_____no_output_____
###Markdown
__Run the code__ in the cell below to plot the trunk features
###Code
# Run this code to plot the trunk features
graph.scatter(allFeatures['trunk_girth'], allFeatures['trunk_height'], c = labels)
graph.title('Classification plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Step 2-----Let's make a support vector machine.The syntax for a support vector machine is as follows:__`model = svm.SVC().fit(features, labels)`__Your features set will be called __`train_X`__ and your labels set will be called __`train_Y`__. Let's first run the SVM in the cell below using the first two features, the leaf features.
###Code
# Sets up the feature and target sets for leaf features
# Feature 1
feature_one = allFeatures['leaf_width'].values
# Feature 2
feature_two = allFeatures['leaf_length'].values
# Features
train_X = np.asarray([feature_one, feature_two]).transpose()
# Labels
train_Y = labels
# Fits the SVM model
###
# REPLACE THE <makeSVM> WITH THE CODE TO MAKE A SVM MODEL AS ABOVE
###
model = <makeSVM>
###
print("Model ready. Now plot it to see the result.")
###Output
_____no_output_____
###Markdown
Let's plot it! Run the cell below to visualise the SVM with our dataset.
###Code
# Run this to plots the SVM model
X_min, X_max = train_X[:, 0].min() - 1, train_X[:, 0].max() + 1
Y_min, Y_max = train_X[:, 1].min() - 1, train_X[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(feature_one, feature_two, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha=0.5)
graph.title('SVM plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.show()
###Output
_____no_output_____
###Markdown
The graph shows three colored zones that the SVM has chosen to group the datapoints in. Color, here, means type of tree. As we can see, the zones correspond reasonably well with the actual tree types of our training data. This means that, for its training data, the SVM can determine tree type quite well from the leaf features.Now let's do the same using trunk features. In the cell below replace: 1. `<addTrunkGirth>` with `'trunk_girth'` 2. `<addTrunkHeight>` with `'trunk_height'` Then __run the code__.
###Code
# Feature 1
###--- REPLACE THE <addTrunkGirth> BELOW WITH 'trunk_girth' (INCLUDING THE QUOTES) ---###
###
trunk_girth = allFeatures[<addTrunkGirth>].values
###
# Feature 2
###--- REPLACE THE <addTrunkHeight> BELOW WITH 'trunk_height' (INCLUDING THE QUOTES) ---###
trunk_height = allFeatures[<addTrunkHeight>].values
###
# Features
trunk_features = np.asarray([trunk_girth, trunk_height]).transpose()
# Fits the SVM model
model = svm.SVC().fit(trunk_features, train_Y)
# Plots the SVM model
X_min, X_max = trunk_features[:, 0].min() - 1, trunk_features[:, 0].max() + 1
Y_min, Y_max = trunk_features[:, 1].min() - 1, trunk_features[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(trunk_girth, trunk_height, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha = 0.5)
graph.title('SVM plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Exercise 6 - Support Vector Machines=====Support vector machines (SVMs) let us predict categories. This exercise will demonstrate a simple support vector machine that can predict a category from a small number of features. Our problem is that we want to be able to categorise which type of tree a new specimen belongs to. To do this, we will use features of three different types of trees to train an SVM. __Run the code__ in the cell below.
###Code
# Run this code!
# It sets up the graphing configuration.
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as graph
%matplotlib inline
graph.rcParams['figure.figsize'] = (15,5)
graph.rcParams["font.family"] = 'DejaVu Sans'
graph.rcParams["font.size"] = '12'
graph.rcParams['image.cmap'] = 'rainbow'
###Output
_____no_output_____
###Markdown
Step 1-----First, we will take a look at the raw data to see what features we have. Replace `<printDataHere>` with `print(dataset.head())` and then __run the code__.
###Code
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
# Loads the SVM library
from sklearn import svm
# Loads the dataset
dataset = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Regresión/Data/trees.csv')
###
# REPLACE <printDataHere> with print(dataset.head()) TO PREVIEW THE DATASET
###
print(dataset.head())
###
###Output
leaf_width leaf_length trunk_girth trunk_height tree_type
0 5.13 6.18 8.26 8.74 0
1 7.49 4.02 8.07 6.78 0
2 9.22 4.16 5.46 8.45 1
3 6.98 11.10 6.96 4.06 2
4 3.46 5.19 8.72 10.40 0
###Markdown
It looks like we have _four features_ (leaf_width, leaf_length, trunk_girth, trunk_height) and _one label_ (tree_type).Let's plot it.__Run the code__ in the cell below.
###Code
# Run this code to plot the leaf features
# This extracts the features. drop() deletes the column we state (tree_type), leaving only the features
allFeatures = dataset.drop(['tree_type'], axis = 1)
# This keeps only the column we state (tree_type), leaving only our label
labels = np.array(dataset['tree_type'])
#Plots the graph
X = allFeatures['leaf_width']
Y = allFeatures['leaf_length']
color=labels
graph.scatter(X, Y, c = color)
graph.title('classification plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.legend()
graph.show()
###Output
No handles with labels found to put in legend.
###Markdown
__Run the code__ in the cell below to plot the trunk features
###Code
# Run this code to plot the trunk features
graph.scatter(allFeatures['trunk_girth'], allFeatures['trunk_height'], c = labels)
graph.title('Classification plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____
###Markdown
Step 2-----Let's make a support vector machine.The syntax for a support vector machine is as follows:__`model = svm.SVC().fit(features, labels)`__Your features set will be called __`train_X`__ and your labels set will be called __`train_Y`__. Let's first run the SVM in the cell below using the first two features, the leaf features.
###Code
# Sets up the feature and target sets for leaf features
# Feature 1
feature_one = allFeatures['leaf_width'].values
# Feature 2
feature_two = allFeatures['leaf_length'].values
# Features
train_X = np.asarray([feature_one, feature_two]).transpose()
# Labels
train_Y = labels
# Fits the SVM model
###
# REPLACE THE <makeSVM> WITH THE CODE TO MAKE A SVM MODEL AS ABOVE
###
model = svm.SVC().fit(train_X, train_Y)
###
print("Model ready. Now plot it to see the result.")
###Output
Model ready. Now plot it to see the result.
###Markdown
Let's plot it! Run the cell below to visualise the SVM with our dataset.
###Code
# Run this to plots the SVM model
X_min, X_max = train_X[:, 0].min() - 1, train_X[:, 0].max() + 1
Y_min, Y_max = train_X[:, 1].min() - 1, train_X[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(feature_one, feature_two, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha=0.5)
graph.title('SVM plot for leaf features')
graph.xlabel('leaf width')
graph.ylabel('leaf length')
graph.show()
###Output
_____no_output_____
###Markdown
The graph shows three colored zones that the SVM has chosen to group the datapoints in. Color, here, means type of tree. As we can see, the zones correspond reasonably well with the actual tree types of our training data. This means that, for its training data, the SVM can predict tree type quite well from the leaf features. Now let's do the same using trunk features. In the cell below replace: 1. `<addTrunkGirth>` with `'trunk_girth'` 2. `<addTrunkHeight>` with `'trunk_height'` Then __run the code__.
###Code
# Feature 1
###--- REPLACE THE <addTrunkGirth> BELOW WITH 'trunk_girth' (INCLUDING THE QUOTES) ---###
###
trunk_girth = allFeatures['trunk_girth'].values
###
# Feature 2
###--- REPLACE THE <addTrunkHeight> BELOW WITH 'trunk_height' (INCLUDING THE QUOTES) ---###
trunk_height = allFeatures['trunk_height'].values
###
# Features
trunk_features = np.asarray([trunk_girth, trunk_height]).transpose()
# Fits the SVM model
model = svm.SVC().fit(trunk_features, train_Y)
# Plots the SVM model
X_min, X_max = trunk_features[:, 0].min() - 1, trunk_features[:, 0].max() + 1
Y_min, Y_max = trunk_features[:, 1].min() - 1, trunk_features[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(X_min, X_max, .02), np.arange(Y_min, Y_max, .02))
Z = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)
graph.scatter(trunk_girth, trunk_height, c = train_Y, cmap = graph.cm.rainbow, zorder = 10, edgecolor = 'k', s = 40)
graph.contourf(XX, YY, Z, cmap = graph.cm.rainbow, alpha = 1.0)
graph.contour(XX, YY, Z, colors = 'k', linestyles = '--', alpha = 0.5)
graph.title('SVM plot for trunk features')
graph.xlabel('trunk girth')
graph.ylabel('trunk height')
graph.show()
###Output
_____no_output_____ |
Basic_topography/Lesson_04_basic_topographic_features.ipynb | ###Markdown
Lesson 04: Basic topographic features *This lesson was made by Simon M Mudd and last updated 22/11/2021* In this lesson we are going to do some basic analysis of topography. There are a lot of different software tools for doing this, for example:* [Whitebox](https://www.whiteboxgeo.com/download-whiteboxtools/)* [TopoToolbox](https://topotoolbox.wordpress.com/)* [SAGA](http://www.saga-gis.org/en/index.html)However, for this example we will use [LSDTopoTools](https://github.com/LSDtopotools) because the person writing this lesson is also the lead developer of that software. The expectation for this lesson is that you are on a GeoSciences Notable notebook, where LSDTopoTools is already installed. If you aren't on that system you can also [install it on a Colab notebook](https://github.com/LSDtopotools/lsdtt_notebooks/blob/master/lsdtopotools_on_colab.ipynb). The objective of this practical is to give you a taster of what kinds of things you might do with topographic data. First import some stuff we need First we make sure the lsdviztools version is updated (it needs to be > 0.4.7):
###Code
!pip install lsdviztools --upgrade
###Output
_____no_output_____
###Markdown
Now we import a bunch of stuff
###Code
import lsdviztools.lsdbasemaptools as bmt
from lsdviztools.lsdplottingtools import lsdmap_gdalio as gio
import lsdviztools.lsdmapwrappers as lsdmw
import pandas as pd
import geopandas as gpd
import cartopy as cp
import cartopy.crs as ccrs
import rasterio as rio
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Data preprocessing For various historical reasons, **LSDTopoTools** does not read *GeoTiff* format, so we need to convert any rasters to [ENVI bil](https://www.l3harrisgeospatial.com/docs/enviimagefiles.html:~:text=The%20ENVI%20image%20format%20is,an%20accompanying%20ASCII%20header%20file.&text=Band%2Dinterleaved%2Dby%2Dline,to%20the%20number%20of%20bands.) format. **This is not the same as ESRI bil! MAKE SURE YOU USE ENVI BIL!!**You could convert any file to `ENVI bil` format using `gdalwarp` and then including the parameter `-of ENVI` (`of` stands for output format) but `lsdviztools` has some built in functions for doing that for you in python. We are going to use the ALOS W3D dataset (from the last lesson) for this lesson, and here is the conversion syntax:
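For reference, the same format conversion could be done directly from a notebook cell with GDAL. This is only a sketch (the filenames are illustrative), and the `lsdviztools` helper used below is what we actually rely on, since it also takes care of details such as reprojection for you:

```python
# Sketch only: convert a GeoTiff to ENVI bil directly with gdalwarp
!gdalwarp -of ENVI rio_aguas_AW3D30.tif rio_aguas_AW3D30.bil
```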
###Code
DataDirectory = "./"
RasterFile = "rio_aguas_AW3D30.tif"
gio.convert4lsdtt(DataDirectory, RasterFile,minimum_elevation=0.01,resolution=30)
###Output
_____no_output_____
###Markdown
You can search for specific files using the `!ls` command, so we can look for the file that has been created. There is a `.tif` file from the last lesson, but the two files with extensions `.bil` and `.hdr` are from the conversion. ENVI bil files have all the data in the `.bil` file and all the georeferencing in the `.hdr` file. The `.hdr` file is an ascii file so you can easily open these files in a text editor and see all the important metadata.
###Code
!ls rio_aguas_AW3D30_UTM*
###Output
_____no_output_____
###Markdown
Now lets do some basic topographic analysis Now will extract some topographic metrics using `lsdtopotools`. The `lsdtt_parameters` are the various parameters that you can use to run an analysis. We will discuss these later. For now, we will just follow this recipe.
###Code
lsdtt_parameters = {"write_hillshade" : "true",
"surface_fitting_radius" : "60",
"print_slope" : "true"}
lsdtt_drive = lsdmw.lsdtt_driver(read_prefix = "rio_aguas_AW3D30_UTM",
write_prefix= "rio_aguas_AW3D30_UTM",
read_path = "./",
write_path = "./",
parameter_dictionary=lsdtt_parameters)
lsdtt_drive.print_parameters()
lsdtt_drive.run_lsdtt_command_line_tool()
###Output
_____no_output_____
###Markdown
Plot some data We are now going to do some simple plots using a mapping package that we put together. There are more general ways to visualise data, but this makes pretty pictures quickly.
###Code
%matplotlib inline
Base_file = "rio_aguas_AW3D30_UTM"
DataDirectory = "./"
this_img = lsdmw.SimpleHillshade(DataDirectory,Base_file,cmap="gist_earth", save_fig=True, size_format="geomorphology", dpi=600)
from IPython.display import Image
Image('rio_aguas_AW3D30_UTM_hillshade.png')
Base_file = "rio_aguas_AW3D30_UTM"
Drape_prefix = "rio_aguas_AW3D30_UTM_SLOPE"
DataDirectory = "./"
img_name2 = lsdmw.SimpleDrape(DataDirectory,Base_file, Drape_prefix,
cmap = "bwr", cbar_loc = "right",
cbar_label = "Gradient (m/m)",
save_fig=True, size_format="geomorphology",
colour_min_max = [0,1.25])
from IPython.display import Image
Image('rio_aguas_AW3D30_UTM_drape.png')
###Output
_____no_output_____
###Markdown
Get some channel profiles Okay, we will now run a different analysis. We will get some channel profiles.
###Code
lsdtt_parameters = {"print_basin_raster" : "true",
"print_chi_data_maps" : "true"}
lsdtt_drive = lsdmw.lsdtt_driver(read_prefix = "rio_aguas_AW3D30_UTM",
write_prefix= "rio_aguas_AW3D30_UTM",
read_path = "./",
write_path = "./",
parameter_dictionary=lsdtt_parameters)
lsdtt_drive.print_parameters()
lsdtt_drive.run_lsdtt_command_line_tool()
###Output
_____no_output_____
###Markdown
We can look to see what files we have using the following command. the `!` tells this notebook to run a command on the underlying linux operating system, and `ls` in linux is a command to list files.
###Code
!ls
###Output
_____no_output_____
###Markdown
The file with the channels is the one with `chi_data_map` in the filename. We are going to load this into a `pandas` dataframe. You can think of `pandas` as a kind of Excel for Python: it handles spreadsheet-like data (and loads more).
###Code
df = pd.read_csv("rio_aguas_AW3D30_UTM_chi_data_map.csv")
df.head()
###Output
_____no_output_____
###Markdown
Okay, now let's look at what we got. This script allows you to plot the channels.
###Code
# Look at the data frame above and see if you can change the plotting_column to something else from the data frame, like the flow distance
%matplotlib inline
fname_prefix = "rio_aguas_AW3D30_UTM"
ChannelFileName = "rio_aguas_AW3D30_UTM_chi_data_map.csv"
DataDirectory = "./"
lsdmw.PrintChiChannelsAndBasins(DataDirectory,fname_prefix, ChannelFileName,
add_basin_labels = True, cmap = "jet", cbar_loc = "right",
colorbarlabel = "elevation (m)", size_format = "ESURF", fig_format = "png",
dpi = 400,plotting_column = "elevation")
Image('rio_aguas_AW3D30_UTM_chi_channels_and_basins.png')
###Output
_____no_output_____
###Markdown
Looking at individual channels using pandas Okay, lets look at some individual channels. We can do this by selecting data in the dataframe.
###Code
# First lets isolate just one of these basins. There is only basin 0 and 1
df_b1 = df[(df['basin_key'] == 0)]
df_b1.head()
###Output
_____no_output_____
###Markdown
We can plot this channel:
###Code
plt.rcParams['figure.figsize'] = [10, 5]
# First lets isolate just one of these basins. There is only basin 0 and 1
df_b1 = df[(df['basin_key'] == 0)]
# The main stem channel is the one with the minimum source key in this basin
min_source = np.amin(df_b1.source_key)
df_b2 = df_b1[(df_b1['source_key'] == min_source)]
# Now make channel profile plots
z = df_b2.elevation
x_locs = df_b2.flow_distance
# Create two subplots and unpack the output array immediately
plt.clf()
f, (ax1) = plt.subplots(1, 1)
ax1.scatter(x_locs, z,s = 0.2)
ax1.set_xlabel("Distance from outlet ($m$)")
ax1.set_ylabel("elevation (m)")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Maybe you want to know the slope of this channel. You can do this by using the `numpy` gradient function.
###Code
z = df.elevation
x = df.flow_distance
S = np.gradient(np.asarray(z),np.asarray(x))
df["slope"] = S
df.head()
###Output
_____no_output_____
###Markdown
Now we plot this. It is very similar to the plot above but now has the slope
###Code
plt.rcParams['figure.figsize'] = [10, 10]
# First lets isolate just one of these basins. There is only basin 0 and 1
df_b1 = df[(df['basin_key'] == 0)]
# The main stem channel is the one with the minimum source key in this basin
min_source = np.amin(df_b1.source_key)
df_b2 = df_b1[(df_b1['source_key'] == min_source)]
# Now make channel profile plots
z = df_b2.elevation
x_locs = df_b2.flow_distance
S = df_b2.slope
# Create two subplots and unpack the output array immediately
plt.clf()
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.scatter(x_locs, z,s = 0.2)
ax2.scatter(x_locs, S,s = 1,c="r")
ax1.set_xlabel("Distance from outlet ($m$)")
ax1.set_ylabel("elevation (m)")
ax2.set_xlabel("Distance from outlet ($m$)")
ax2.set_ylabel("Slope (m/m)")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
This slope (bottom figure) is very noisy. One way to deal with this is to smooth the data. We can smooth the data by running a moving window over it and doing some averaging inside the window. Python has lots of tools for this. In this case I use a `rolling` window and I have picked various settings. You don't need to worry about this too much, the only number that you might want to play with is the first number after `rolling` which is the number of datapoints in the window. The bigger this number, the more smoothed the data becomes.
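If you want to see the effect of the window size, you could add a second, more heavily smoothed column alongside the one created below. This is just a sketch reusing the same call as the next cell; the column name is made up for illustration:

```python
# Sketch: same rolling mean as below, but with a wider 100-point window for comparison
df['slope_rolling_wide'] = df.slope.rolling(100, win_type='hamming').mean()
```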
###Code
df['slope_rolling'] = df.slope.rolling(40,win_type='hamming').mean()
df.head()
###Output
_____no_output_____
###Markdown
Let's have a look at what this smoothing has done.
###Code
plt.rcParams['figure.figsize'] = [10, 5]
# First lets isolate just one of these basins. There is only basin 0 and 1
df_b1 = df[(df['basin_key'] == 0)]
# The main stem channel is the one with the minimum source key in this basin
min_source = np.amin(df_b1.source_key)
df_b2 = df_b1[(df_b1['source_key'] == min_source)]
# Now make channel profile plots
z = df_b2.elevation
x_locs = df_b2.flow_distance
S = df_b2.slope
SR = df_b2.slope_rolling
# Create two subplots and unpack the output array immediately
plt.clf()
f, (ax1) = plt.subplots(1, 1)
ax1.scatter(x_locs, S,s = 0.2, label = "slope")
ax1.scatter(x_locs, SR,s = 1,c="r", label = "rolling slope")
ax1.set_xlabel("Distance from outlet ($m$)")
ax1.set_ylabel("Slope and rolling slope (m/m)")
plt.legend()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now we can compare the channel profile to the channel gradients and see if the channel gradient is steep where you think it might be.
###Code
plt.rcParams['figure.figsize'] = [10, 5]
# First lets isolate just one of these basins. There is only basin 0 and 1
df_b1 = df[(df['basin_key'] == 0)]
# The main stem channel is the one with the minimum source key in this basin
# If you want to play with this a bit you can change the source number to look at different channels
min_source = np.amin(df_b1.source_key)
df_b2 = df_b1[(df_b1['source_key'] == min_source)]
# Now make channel profile plots
z = df_b2.elevation
x_locs = df_b2.flow_distance
S = df_b2.slope
SR = df_b2.slope_rolling
# Create two subplots and unpack the output array immediately
plt.clf()
f, (ax1) = plt.subplots(1, 1)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
# Make the scatter plots
ax1.scatter(x_locs, z,s = 1, label='Longitudinal profile')
ax2.scatter(x_locs, SR,s = 1,c="r", label='Channel slope')
# Some code to make sure the legend renders on the same axis
lines, labels = ax1.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc=0)
ax1.set_xlabel("Distance from outlet ($m$)")
ax1.set_ylabel("Elevation (m)")
ax2.set_xlabel("Distance from outlet ($m$)")
ax2.set_ylabel("Rolling Slope (m/m)")
plt.tight_layout()
###Output
_____no_output_____ |
Interactives/SmallAngleEquation.ipynb | ###Markdown
Small Angle Approximation InteractiveThis interactive can be used to explore the relationship between an object's size, its distance, and its observed angular size. When an object is far away compared to its size, astronomers use the *small angle approximation* to simplify the relationship.There are two control sliders: the first for the size of the object (s) and the second for the distance from Earth to the object (d). The interactive uses both the "exact" equation and the small angle approximation to estimate the angular size of the object:$$\theta_{exact} = 2\arctan\left(\frac{s}{2d}\right) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \theta_{approx} = \frac{s}{d}$$**Note**: The above expressions give the angle $\theta$ in radians. To get an angle in degrees, we must multiply by the conversion factor $\frac{180^{\circ}}{\pi}$ (because there are $2\pi$ radians or $360^{\circ}$ in a circle).
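Before playing with the sliders, here is a minimal numerical check of the two formulas (this snippet is not part of the interactive itself; the values are chosen to match the default slider settings):

```python
# Quick check: exact angle vs small-angle estimate for s = 5, d = 50
import numpy as np

s, d = 5.0, 50.0
theta_exact = np.degrees(2 * np.arctan(s / (2 * d)))
theta_approx = np.degrees(s / d)
print(theta_exact, theta_approx)  # the two angles agree closely because d >> s
```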
###Code
# Originally created on June 13, 2018 by Samuel Holen
import ipywidgets as widgets
import numpy as np
import bqplot.pyplot as bq
from IPython.display import display
def approx_theta(s,d):
# Small angle approximation equation
return s/d
def exact_theta(s,d):
# Exact equation for theta
return 2*np.arctan(s/(2*d))
def circle(r,d=0.):
# Creates a circle given a radius r and displacement along the
# x-axis d
theta = np.linspace(0,2*np.pi,1000)
return (r*np.cos(theta)+d,r*np.sin(theta))
def par_circ(r,theta0,theta1,d=0.,h=0.):
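    # Creates an arc of radius r from angle theta0 to theta1, centred at (d, h)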
theta = np.linspace(theta0,theta1,1000)
return (r*np.cos(theta)+d,r*np.sin(theta)+h)
def ellipse(a,b,d=0.):
# Creates an ellipse centered at (d,0) with a semimajor axis of 'a'
# in the x direction and 'b' in the y direction
theta = np.linspace(0,2*np.pi,1000)
return (a*np.cos(theta)+d,b*np.sin(theta))
def update(change=None):
# Update the display.
D1.y = [0,h_slider.value/2]
D2.y = [0,-h_slider.value/2]
D1.x = [0,d_slider.value]
D2.x = [0,d_slider.value]
ref.x = [0,d_slider.value]
# Note that the ellipse is used so that the display needn't be a square.
X_new, Y_new = circle(h_slider.value/2, d_slider.value)
Object.x = X_new
Object.y = Y_new
# Update the resulting angles
theta_approx = 180/np.pi*approx_theta(h_slider.value,d_slider.value)
theta_exact = 180/np.pi*exact_theta(h_slider.value,d_slider.value)
approx_eqn.value='<p><b>"Small Angle" Equation:</b> <br/> {:.5f}° = (180°/π) * ({} / {})</p>'.format(theta_approx,h_slider.value,d_slider.value)
exact_eqn.value ='<p><b>"Exact" Equation:</b> <br/> {:.5f}° = 2 arctan( {} / 2*{} )</p>'.format(theta_exact,h_slider.value,d_slider.value)
difference_eqn.value='<p><b>Difference:</b> <br/> {:.5f}° ({:.2f}%)</p>'.format(abs(theta_exact-theta_approx), 100*(abs(theta_exact-theta_approx)/theta_exact))
arc_loc = (d_slider.value - h_slider.value/2)/3
angle_loc = exact_theta(h_slider.value,d_slider.value)/2
xc,yc = par_circ(r=arc_loc, theta0=2*np.pi-angle_loc, theta1=2*np.pi+angle_loc)
angle_ex.x = xc
angle_ex.y = yc
h_slider = widgets.FloatSlider(
value=5,
min=0.1,
max=30.05,
step=0.1,
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='.2f',
)
d_slider = widgets.FloatSlider(
value=50,
min=20.,
max=100,
step=0.1,
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
theta_approx = 180/np.pi*approx_theta(h_slider.value,d_slider.value)
theta_exact = 180/np.pi*exact_theta(h_slider.value,d_slider.value)
# Create labels for sliders
h_label = widgets.Label(value='Size of star (s)')
d_label = widgets.Label(value='Distance to star (d)')
# Create text display
approx_eqn = widgets.HTML(value='<p><b>"Small Angle" Equation:</b> <br/> {:.5f}° = (180°/π) * ({} / {})</p>'.format(theta_approx,h_slider.value,d_slider.value))
exact_eqn = widgets.HTML(value='<p><b>"Exact" Equation:</b> <br/> {:.5f}° = 2 arctan( {} / 2*{} )</p>'.format(theta_exact,h_slider.value,d_slider.value))
difference_eqn = widgets.HTML(value='<p><b>Difference:</b> <br/> {:.5f}° ({:.2f}%)</p>'.format(abs(theta_exact-theta_approx), 100*(abs(theta_exact-theta_approx)/theta_exact)))
blank = widgets.Label(value='')
## PLOT/FIGURE ##
# Sets axis scale for x and y to
sc_x = bq.LinearScale(min=-5,max=115)
sc_y = bq.LinearScale(min=-26,max=26)
# Get the range to work with
x_range = sc_x.max - sc_x.min
y_range = sc_y.max - sc_y.min
# Initial height and distance of star
init_h = h_slider.value
init_d = d_slider.value
# Note that the ellipse is used so that the display needn't be a square.
# Creates a circular 'star'
X,Y = circle(r=init_h/2, d=init_d)
# Sets up the axes, grid-lines are set to black so that they blend in with the background.
ax_x = bq.Axis(scale=sc_x, grid_color='white', num_ticks=0)
ax_y = bq.Axis(scale=sc_y, orientation='vertical', grid_color='white', num_ticks=0)
# Draws the lines to the top and bottom of the star respectively
D1 = bq.Lines(x=[0,init_d], y=[0,init_h/2], scales={'x': sc_x, 'y': sc_y}, colors=['white'])
D2 = bq.Lines(x=[0,init_d], y=[0,-init_h/2], scales={'x': sc_x, 'y': sc_y}, colors=['white'])
# Creates the star
Object = bq.Lines(scales={'x': sc_x, 'y': sc_y}, x=X, y=Y, colors=['blue'],
fill='inside', fill_colors=['blue'])
# Creates a reference line.
ref = bq.Lines(x=[0,init_d], y=[0,0], scales={'x': sc_x, 'y': sc_y}, colors=['white'], line_style='dashed')
arc_loc = (init_d - init_h/2)/3
angle_loc = exact_theta(init_h,init_d)/2
xc,yc = par_circ(r=arc_loc, theta0=2*np.pi-angle_loc, theta1=2*np.pi+angle_loc)
angle_ex = bq.Lines(x=xc, y=yc, scales={'x': sc_x, 'y': sc_y}, colors=['white'])
angle_label = bq.Label(x=[2], y=[0], scales={'x': sc_x, 'y': sc_y},
text=[r'$$\theta$$'], default_size=15, font_weight='bolder',
colors=['white'], update_on_move=False)
# Update the the plot/display
h_slider.observe(update, names=['value'])
d_slider.observe(update, names=['value'])
# Creates the figure. The background color is set to black so that it looks like 'space.' Also,
# removes the default y padding.
fig = bq.Figure(title='Small Angle Approximation', marks=[Object,D1,D2,angle_ex], axes=[ax_x, ax_y],
padding_x=0, padding_y=0, animation=100, background_style={'fill' : 'black'})
# Display to the screen
# Set up the figure
fig_width = 750
fig.layout.width = '{:.0f}px'.format(fig_width)
fig.layout.height = '{:.0f}px'.format(fig_width/2)
# Set up the bottom part containing the controls and display of equations
h_box = widgets.VBox([h_label, h_slider])
d_box = widgets.VBox([d_label, d_slider])
slide_box = widgets.HBox([h_box, d_box])
h_box.layout.width = '{:.0f}px'.format(fig_width/2)
d_box.layout.width = '{:.0f}px'.format(fig_width/2)
eqn_box = widgets.HBox([exact_eqn, approx_eqn, difference_eqn])
exact_eqn.layout.width = '{:.0f}px'.format(fig_width/3)
approx_eqn.layout.width = '{:.0f}px'.format(fig_width/3)
difference_eqn.layout.width = '{:.0f}px'.format(fig_width/3)
BOX = widgets.VBox([fig, slide_box, eqn_box])
display(BOX)
###Output
_____no_output_____ |
01 Boston Wohnungsgrundstueck-Preise.ipynb | ###Markdown
Boston Housing Prices This notebook shows, by example, how scikit-learn can be used for a task such as linear regression. That is, we are looking for suitable weights $\beta_i$ for the following equation: $$ y=\beta _{0} + \beta _{1} \cdot x_{1} + \cdots + \beta _{p} \cdot x_{p} + \varepsilon $$ First we load a dataset. It already ships with scikit-learn. Let us now take a look at its detailed description.
###Code
import sklearn.datasets
boston = sklearn.datasets.load_boston()
print(boston.DESCR)
###Output
_____no_output_____
###Markdown
We can also look at the data directly. However, all these numbers usually do not help much in their raw, unprocessed form.
###Code
boston
###Output
_____no_output_____
###Markdown
Examine the target value In the output above it is hard to make out anything. But roughly how expensive are the houses, anyway? To find out, we first import the `matplotlib` module, which we can use for visualization.
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Now we can visualize the distribution of the house prices:
###Code
plt.hist(boston.target)
plt.ylabel("Anzahl")
plt.xlabel("Hauspreis in 1000 US-\$")
plt.show()
###Output
_____no_output_____
###Markdown
Now we want to compute the corresponding statistics. For this we use the `statistics` module.
###Code
import statistics
print("Mittelwert: \t\t", statistics.mean(boston.target))
print("Median: \t\t", statistics.median(boston.target))
print("Standardabweichung: \t", statistics.stdev(boston.target))
###Output
_____no_output_____
###Markdown
Examine the attributes Using the attributes, i.e. the other properties of a neighbourhood, we later want to predict the price of a house. To do so, we first look at how these features are distributed in the dataset. A scatter plot is used for this.
###Code
for feature_name, data_column in zip(boston.feature_names, boston.data.T):
plt.plot(data_column, boston.target, "b.")
plt.ylabel("Hauspreis in 1000 US-\$")
plt.xlabel(feature_name)
plt.show()
###Output
_____no_output_____
###Markdown
Further information about the plot function can be obtained with the question-mark operator:
###Code
?plt.plot
###Output
_____no_output_____
###Markdown
For some of the attributes, a scatter plot was definitely the wrong form of visualization. Which other kinds of plots would be suitable here - in particular for the variable CHAS? A list of plot commands can be found [here](https://matplotlib.org/api/pyplot_summary.html).
###Code
# feel free to create further plots here
###Output
_____no_output_____
###Markdown
Fit a linear regression For machine learning, an existing dataset is split into a training set and a test set. With the training set we fit our $\beta_i$ values in the following equation: $$ y=\beta _{0} + \beta _{1} \cdot x_{1} + \cdots + \beta _{p} \cdot x_{p} + \varepsilon $$ With the test set we then check how well we can "predict" it. The idea is that a model should always be evaluated on data it has not seen before.
###Code
import sklearn.model_selection
###Output
_____no_output_____
###Markdown
The dataset is now split into a training set and a test set. The training set will contain 67 % of the available data, the test set 33 %.
###Code
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(
boston.data, boston.target, test_size=0.33)
###Output
_____no_output_____
###Markdown
We could now examine the attributes X_train together with the target Y_train just as we did with `boston.data`. However, the split was random, so nothing should have changed fundamentally. Next we import the linear model as our learning algorithm and fit the weights $\beta_i$ to the dataset at hand:
###Code
import sklearn.linear_model
lm = sklearn.linear_model.LinearRegression()
lm.fit(X_train, Y_train)
print("\nIntercept:", f"{lm.intercept_:9.4f}\n")
print("Feature\t Koeffizient\n-----------|-----------------")
for feature, coefficient in zip(boston.feature_names, lm.coef_):
print("{:<10}".format(feature), f"{coefficient:9.4f}")
###Output
_____no_output_____
###Markdown
This means the linear regression equation reads roughly: $$ y = 25.19 - 0.12 \cdot x_{1} + 0.04 \cdot x_{2} + \cdots + \varepsilon $$ The equation varies from run to run, because the random split makes the training set look a little different each time. Now we can already use the linear model to make predictions:
###Code
print("\nEingabe:")
[print("\t{:<10}".format(feature), attr) for feature, attr in zip(boston.feature_names, X_test[0])]
print()
ausgabe = lm.predict(X_test[0:1])[0]
print("Ausgabe: ", ausgabe, "\n")
###Output
_____no_output_____
###Markdown
This means: once I have collected the values for a house, I can now compute a price. In this case it is `22.77`. Since the unit is 1000 US-$, the number still has to be multiplied by 1000, i.e. the value is 22,770. But can the model be trusted? That still needs to be investigated. To do so, we predict all target values of the test set and compare the results with the actual, known values.
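As a quick sketch (not part of the original notebook), the single prediction above can also be reproduced by hand from the fitted weights, which is all that `lm.predict` does for a linear model:

```python
# Sketch: recompute the prediction for the first test sample from intercept and coefficients
import numpy as np

manual = lm.intercept_ + np.dot(lm.coef_, X_test[0])
print(manual)  # should match the value produced by lm.predict(X_test[0:1])
```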
###Code
Y_pred = lm.predict(X_test)
###Output
_____no_output_____
###Markdown
Now we still need to compare the predicted values with the actual ones.
###Code
plt.scatter(Y_test, Y_pred)
plt.xlabel("Tatsächlicher Hauspreis in $1000 US-\$")
plt.ylabel("Vorhergesagter Hauspreis in $1000 US-\$")
plt.plot([0, 50], [0, 50], "r", label="perfekter Treffer")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now we can compute the error of the model, i.e. the difference between the prediction and the actual value.
###Code
error = Y_test - Y_pred
plt.hist(error)
plt.xlabel("Differenz zwischen dem Hauspreis des Modells und des Tatsächlichen in $1000 US-\$")
plt.show()
###Output
_____no_output_____
###Markdown
Finally, a few metrics of the kind we might need for a scientific report. There, visualizations like the ones above cannot always be included, because space runs out...
###Code
import sklearn.metrics
###Output
_____no_output_____
###Markdown
There is a whole range of metrics that are used in different fields of science. Whether a metric is helpful in the current context is something the person in front of the screen has to decide.
###Code
for score in dir(sklearn.metrics):
if score.endswith("_score"):
print(score)
###Output
_____no_output_____
###Markdown
A selection of metrics:
###Code
print("Mean absolute error: %.2f" % sklearn.metrics.mean_absolute_error(Y_test, Y_pred))
print("Mean squared error : %.2f" % sklearn.metrics.mean_squared_error(Y_test, Y_pred))
print("r^2 : %.2f" % sklearn.metrics.r2_score(Y_test, Y_pred))
###Output
_____no_output_____ |
python-data/functions.ipynb | ###Markdown
Functions * Functions as Objects* Lambda Functions* Closures* \*args, \*\*kwargs* Currying* Generators* Generator Expressions* itertools Functions as Objects Python treats functions as objects which can simplify data cleaning. The following contains a transform utility class with two functions to clean strings:
###Code
%%file transform_util.py
import re
class TransformUtil:
@classmethod
def remove_punctuation(cls, value):
"""Removes !, #, and ?.
"""
return re.sub('[!#?]', '', value)
@classmethod
def clean_strings(cls, strings, ops):
"""General purpose method to clean strings.
Pass in a sequence of strings and the operations to perform.
"""
result = []
for value in strings:
for function in ops:
value = function(value)
result.append(value)
return result
###Output
Overwriting transform_util.py
###Markdown
Below are nose tests that exercises the utility functions:
###Code
%%file tests/test_transform_util.py
from nose.tools import assert_equal
from ..transform_util import TransformUtil
class TestTransformUtil():
states = [' Alabama ', 'Georgia!', 'Georgia', 'georgia', \
'FlOrIda', 'south carolina##', 'West virginia?']
expected_output = ['Alabama',
'Georgia',
'Georgia',
'Georgia',
'Florida',
'South Carolina',
'West Virginia']
def test_remove_punctuation(self):
assert_equal(TransformUtil.remove_punctuation('!#?'), '')
def test_map_remove_punctuation(self):
# Map applies a function to a collection
output = map(TransformUtil.remove_punctuation, self.states)
assert_equal('!#?' not in output, True)
def test_clean_strings(self):
clean_ops = [str.strip, TransformUtil.remove_punctuation, str.title]
output = TransformUtil.clean_strings(self.states, clean_ops)
assert_equal(output, self.expected_output)
###Output
Overwriting tests/test_transform_util.py
###Markdown
Execute the nose tests in verbose mode:
###Code
!nosetests tests/test_transform_util.py -v
###Output
core.tests.test_transform_util.TestTransformUtil.test_clean_strings ... ok
core.tests.test_transform_util.TestTransformUtil.test_map_remove_punctuation ... ok
core.tests.test_transform_util.TestTransformUtil.test_remove_punctuation ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.001s
OK
###Markdown
Lambda Functions Lambda functions are anonymous functions and are convenient for data analysis, as data transformation functions take functions as arguments. Sort a sequence of strings by the number of letters:
###Code
strings = ['foo', 'bar,', 'baz', 'f', 'fo', 'b', 'ba']
strings.sort(key=lambda x: len(list(x)))
strings
###Output
_____no_output_____
###Markdown
Closures Closures are dynamically-generated functions returned by another function. The returned function has access to the variables in the local namespace where it was created. Closures are often used to implement decorators. Decorators are useful to transparently wrap something with additional functionality:

```python
def my_decorator(fun):
    def myfun(*params, **kwparams):
        do_something()
        fun(*params, **kwparams)
    return myfun
```

Each time the following closure() is called, it generates the same output:
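As a concrete aside (not from the original notebook; the names are made up for illustration), a runnable version of that decorator pattern looks like this:

```python
def announce(fun):
    def wrapper(*params, **kwparams):
        # The wrapper adds behaviour before delegating to the wrapped function
        print('calling %s' % fun.__name__)
        return fun(*params, **kwparams)
    return wrapper

@announce
def add(a, b):
    return a + b

add(1, 2)  # prints "calling add", then returns 3
```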
###Code
def make_closure(x):
def closure():
print('Secret value is: %s' % x)
return closure
closure = make_closure(7)
closure()
###Output
Secret value is: 7
###Markdown
Keep track of arguments passed:
###Code
def make_watcher():
dict_seen = {}
def watcher(x):
if x in dict_seen:
return True
else:
dict_seen[x] = True
return False
return watcher
watcher = make_watcher()
seq = [1, 1, 2, 3, 5, 8, 13, 2, 5, 13]
[watcher(x) for x in seq]
###Output
_____no_output_____
###Markdown
\*args, \*\*kwargs \*args and \*\*kwargs are useful when you don't know how many arguments might be passed to your function or when you want to handle named arguments that you have not defined in advance. Print arguments and call the input function on *args:
###Code
def foo(func, arg, *args, **kwargs):
print('arg: %s', arg)
print('args: %s', args)
print('kwargs: %s', kwargs)
print('func result: %s', func(args))
foo(sum, "foo", 1, 2, 3, 4, 5)
###Output
('arg: %s', 'foo')
('args: %s', (1, 2, 3, 4, 5))
('kwargs: %s', {})
('func result: %s', 15)
###Markdown
Currying Currying means to derive new functions from existing ones by partial argument application. Currying is used in pandas to create specialized functions for transforming time series data. The argument y in add_numbers is curried:
###Code
def add_numbers(x, y):
return x + y
add_seven = lambda y: add_numbers(7, y)
add_seven(3)
###Output
_____no_output_____
###Markdown
The built-in functools can simplify currying with partial:
###Code
from functools import partial
add_five = partial(add_numbers, 5)
add_five(2)
###Output
_____no_output_____
###Markdown
Generators A generator is a simple way to construct a new iterable object. Generators return a sequence lazily. When you call the generator, no code is immediately executed until you request elements from the generator. The generator below lazily yields the squares of the first n integers:
###Code
def squares(n=5):
for x in xrange(1, n + 1):
yield x ** 2
# No code is executed
gen = squares()
# Generator returns values lazily
for x in squares():
print x
###Output
1
4
9
16
25
###Markdown
Generator ExpressionsA generator expression is analogous to a comprehension. A list comprehension is enclosed by [], a generator expression is enclosed by ():
###Code
gen = (x ** 2 for x in xrange(1, 6))
for x in gen:
print x
###Output
1
4
9
16
25
###Markdown
itertoolsThe library itertools has a collection of generators useful for data analysis.Function groupby takes a sequence and a key function, grouping consecutive elements in the sequence by the input function's return value (the key). groupby returns the function's return value (the key) and a generator.
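groupby is just one of many; as a brief illustrative aside (not from the original notebook), chain and combinations are also frequently useful:

```python
# Sketch: two more itertools generators
import itertools
list(itertools.chain(['foo'], ['bar', 'baz']))          # ['foo', 'bar', 'baz']
list(itertools.combinations(['foo', 'bar', 'baz'], 2))  # all unordered pairs
```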
###Code
import itertools
first_letter = lambda x: x[0]
strings = ['foo', 'bar', 'baz']
for letter, gen_names in itertools.groupby(strings, first_letter):
print letter, list(gen_names)
###Output
f ['foo']
b ['bar', 'baz']
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks). Functions * Functions as Objects* Lambda Functions* Closures* \*args, \*\*kwargs* Currying* Generators* Generator Expressions* itertools Functions as Objects Python treats functions as objects which can simplify data cleaning. The following contains a transform utility class with two functions to clean strings:
###Code
%%file transform_util.py
import re
class TransformUtil:
@classmethod
def remove_punctuation(cls, value):
"""Removes !, #, and ?.
"""
return re.sub('[!#?]', '', value)
@classmethod
def clean_strings(cls, strings, ops):
"""General purpose method to clean strings.
Pass in a sequence of strings and the operations to perform.
"""
result = []
for value in strings:
for function in ops:
value = function(value)
result.append(value)
return result
###Output
Overwriting transform_util.py
###Markdown
Below are nose tests that exercises the utility functions:
###Code
%%file tests/test_transform_util.py
from nose.tools import assert_equal
from ..transform_util import TransformUtil
class TestTransformUtil():
states = [' Alabama ', 'Georgia!', 'Georgia', 'georgia', \
'FlOrIda', 'south carolina##', 'West virginia?']
expected_output = ['Alabama',
'Georgia',
'Georgia',
'Georgia',
'Florida',
'South Carolina',
'West Virginia']
def test_remove_punctuation(self):
assert_equal(TransformUtil.remove_punctuation('!#?'), '')
def test_map_remove_punctuation(self):
# Map applies a function to a collection
output = map(TransformUtil.remove_punctuation, self.states)
assert_equal('!#?' not in output, True)
def test_clean_strings(self):
clean_ops = [str.strip, TransformUtil.remove_punctuation, str.title]
output = TransformUtil.clean_strings(self.states, clean_ops)
assert_equal(output, self.expected_output)
###Output
Overwriting tests/test_transform_util.py
###Markdown
Execute the nose tests in verbose mode:
###Code
!nosetests /tests/test_transform_util.py -v
###Output
Failure: ImportError (No module named test_transform_util) ... ERROR
======================================================================
ERROR: Failure: ImportError (No module named test_transform_util)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\software\install\Anaconda2\lib\site-packages\nose\loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "D:\software\install\Anaconda2\lib\site-packages\nose\importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "D:\software\install\Anaconda2\lib\site-packages\nose\importer.py", line 79, in importFromDir
fh, filename, desc = find_module(part, path)
ImportError: No module named test_transform_util
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
###Markdown
Lambda Functions Lambda functions are anonymous functions and are convenient for data analysis, as data transformation functions take functions as arguments. Sort a sequence of strings by the number of letters:
###Code
strings = ['foo', 'bar,', 'baz', 'f', 'fo', 'b', 'ba']
strings.sort(key=lambda x: len(list(x)))
strings
###Output
_____no_output_____
###Markdown
Closures Closures are dynamically-genearated functions returned by another function. The returned function has access to the variables in the local namespace where it was created. Closures are often used to implement decorators. Decorators are useful to transparently wrap something with additional functionality:```pythondef my_decorator(fun): def myfun(*params, **kwparams): do_something() fun(*params, **kwparams) return myfun``` Each time the following closure() is called, it generates the same output:
###Code
def make_closure(x):
def closure():
print('Secret value is: %s' % x)
return closure
closure = make_closure(7)
closure()
###Output
Secret value is: 7
###Markdown
Keep track of arguments passed:
###Code
def make_watcher():
dict_seen = {}
def watcher(x):
if x in dict_seen:
return True
else:
dict_seen[x] = True
return False
return watcher
watcher = make_watcher()
seq = [1, 1, 2, 3, 5, 8, 13, 2, 5, 13]
[watcher(x) for x in seq]
###Output
_____no_output_____
###Markdown
\*args, \*\*kwargs \*args and \*\*kwargs are useful when you don't know how many arguments might be passed to your function or when you want to handle named arguments that you have not defined in advance. Print arguments and call the input function on *args:
###Code
def foo(func, arg, *args, **kwargs):
print('arg: %s', arg)
print('args: %s', args)
print('kwargs: %s', kwargs)
print('func result: %s', func(args))
foo(sum, "foo", 1, 2, 3, 4, 5)
###Output
('arg: %s', 'foo')
('args: %s', (1, 2, 3, 4, 5))
('kwargs: %s', {})
('func result: %s', 15)
###Markdown
Currying Currying means to derive new functions from existing ones by partial argument application. Currying is used in pandas to create specialized functions for transforming time series data. The argument y in add_numbers is curried:
###Code
def add_numbers(x, y):
return x + y
add_seven = lambda y: add_numbers(7, y)
add_seven(3)
###Output
_____no_output_____
###Markdown
The built-in functools module can simplify currying with partial:
###Code
from functools import partial
add_five = partial(add_numbers, 5)
add_five(2)
###Output
_____no_output_____
###Markdown
Generators A generator is a simple way to construct a new iterable object. Generators return a sequence lazily: when you call the generator, no code is executed until you request elements from it. The following generator lazily yields the squares of the first n integers:
###Code
def squares(n=5):
for x in xrange(1, n + 1):
yield x ** 2
# No code is executed
gen = squares()
# Generator returns values lazily
for x in squares():
print x
###Output
1
4
9
16
25
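###Markdown
A hedged side note: the gen object created above has not been consumed by the loop (which called squares() again), and a generator can only be iterated once:
###Code
list(gen)   # -> [1, 4, 9, 16, 25]
list(gen)   # -> [] -- the generator is now exhausted
###Output
_____no_output_____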
###Markdown
Generator Expressions A generator expression is analogous to a comprehension. A list comprehension is enclosed by [], a generator expression is enclosed by ():
###Code
gen = (x ** 2 for x in xrange(1, 6))
for x in gen:
print x
###Output
1
4
9
16
25
###Markdown
itertools The itertools library has a collection of generators useful for data analysis. The function groupby takes a sequence and a key function, grouping consecutive elements in the sequence by the key function's return value (the key). For each group, groupby yields the key together with a generator over the group's elements.
###Code
import itertools
first_letter = lambda x: x[0]
strings = ['foo', 'bar', 'baz']
for letter, gen_names in itertools.groupby(strings, first_letter):
print letter, list(gen_names)
###Output
f ['foo']
b ['bar', 'baz']
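###Markdown
Note that groupby only groups *consecutive* elements, so in practice the input is usually sorted by the same key first. A hedged sketch (the extra word 'apple' is added just for illustration):
###Code
words = ['foo', 'bar', 'apple', 'baz']
# Sorting by the key first ensures equal keys end up adjacent before grouping
{letter: list(gen_names)
 for letter, gen_names in itertools.groupby(sorted(words, key=first_letter), first_letter)}
# -> {'a': ['apple'], 'b': ['bar', 'baz'], 'f': ['foo']}
###Output
_____no_output_____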
|
fig_7_networksize.ipynb | ###Markdown
Figure 7. Comparing RDD for different network sizes Imports
###Code
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6)
#%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rand
import pandas as pd
import seaborn as sns
from lib.lif import LIF, ParamsLIF
from lib.causal import causaleffect_maxv, causaleffect_maxv_linear, causaleffect_maxv_sp
#Angle between two vectors
def alignment(a,b):
da = np.dot(a,a)
db = np.dot(b,b)
if da > 0 and db > 0:
return 180/np.pi*np.arccos(np.dot(a,b)/np.sqrt(da*db))
else:
return 360.
def mse(pred,true):
return np.mean((pred - true)**2)
###Output
_____no_output_____
###Markdown
A. Dependence on $N$ and $c$
###Code
nsims = 5
cvals = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
#cvals = np.array([0.01, 0.25, 0.5])
#Nvals = np.logspace(1, 3, 6, dtype = int)
Nvals = np.logspace(1, 3, 4, dtype = int)
tau_s = 0.020
dt = 0.001
t = 100
sigma = 10
x = 0
p = 0.1
DeltaT = 20
W = np.array([12, 9])
params = ParamsLIF(sigma = sigma)
lif = LIF(params, t = t)
lif.W = W
t_filter = np.linspace(0, 1, 2000)
exp_filter = np.exp(-t_filter/tau_s)
exp_filter = exp_filter/np.sum(exp_filter)
ds = exp_filter[0]
#c (correlation between noise inputs)
beta_mse_rd_c = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_fd_c = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_bp_c = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_rd_c_linear = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_fd_c_linear = np.zeros((len(cvals), len(Nvals), nsims))
#beta_sp_c = np.zeros((len(cvals), params.n))
target = 0.1
W = 10*np.ones(int(Nvals[-1]))
#W = np.random.randn(int(Nvals[-1]))*5
V = np.random.randn(int(Nvals[-1]))*5
cost = lambda s,a: (np.dot(a[0:len(s)], s) - len(s)*target)**2
#Cost function
#B1 = 1
#B2 = 2
#x = .01
#y = 0.1
#z = 0
#cost = lambda s1, s2: (B1*s1-x)**2 + (z+B2*s2 - B2*(B1*s1-y)**2)**2
params.c = 0.99
params.n = 10
lif.setup(params)
lif.W = W[0:10]
(v, h, _, _, u) = lif.simulate(DeltaT)
h.shape
n_units = 10
s = np.zeros(h.shape)
for l in range(10):
s[l,:] = np.convolve(h[l,:], exp_filter)[0:h.shape[1]]
cost_s = cost(s,V[0:n_units])
plot(cost_s)
#Get 'true' causal effects by estimating with unconfounded data
DeltaT = 20
nsims = 10
i = 0; c = 0.0
beta_true_c = np.zeros((len(Nvals), nsims, np.max(Nvals)))
for j, n_units in enumerate(Nvals):
#for j, n_units in enumerate(Nvals[0:1]):
print("Running %d simulations with c=%s, n=%d"%(nsims, c, n_units))
params.c = c
params.n = n_units
lif = LIF(params, t = t)
lif.W = W[0:n_units]
for k in range(nsims):
(v, h, _, _, u) = lif.simulate(DeltaT)
s = np.zeros(h.shape)
for l in range(n_units):
s[l,:] = np.convolve(h[l,:], exp_filter)[0:h.shape[1]]
cost_s = cost(s,V[0:n_units])
beta_true_c[j,k,0:n_units] = causaleffect_maxv(u, cost_s, DeltaT, 1, params)
#print(beta_fd_c)
mean_beta_true_c = np.mean(beta_true_c, axis = 1)
mean_beta_true_c.shape
plt.imshow(s[:,0:1000])
plt.colorbar()
#Compute causal effects for different c values
DeltaT = 20
nsims = 10
mean_beta_aln_c = np.zeros((len(cvals), len(Nvals), nsims))
mean_beta_aln_fd_c = np.zeros((len(cvals), len(Nvals), nsims))
mean_beta_c = np.zeros((len(cvals), len(Nvals), nsims))
mean_beta_fd_c = np.zeros((len(cvals), len(Nvals), nsims))
for i,c in enumerate(cvals):
for j, n_units in enumerate(Nvals):
#for j, n_units in enumerate(Nvals[0:1]):
print("Running %d simulations with c=%s, n=%d"%(nsims, c, n_units))
params.c = c
params.n = n_units
lif = LIF(params, t = t)
lif.W = W[0:n_units]
for k in range(nsims):
(v, h, _, _, u) = lif.simulate(DeltaT)
s = np.zeros(h.shape)
for l in range(n_units):
s[l,:] = np.convolve(h[l,:], exp_filter)[0:h.shape[1]]
cost_s = cost(s,V[0:n_units])
beta_est_c = causaleffect_maxv(u, cost_s, DeltaT, p, params)
beta_est_fd_c = causaleffect_maxv(u, cost_s, DeltaT, 1, params)
mean_beta_aln_c[i,j,k] = alignment(beta_est_c, mean_beta_true_c[j,0:n_units])
mean_beta_aln_fd_c[i,j,k] = alignment(beta_est_fd_c, mean_beta_true_c[j,0:n_units])
mean_beta_c[i,j,k] = mse(beta_est_c, mean_beta_true_c[j,0:n_units])
mean_beta_fd_c[i,j,k] = mse(beta_est_fd_c, mean_beta_true_c[j,0:n_units])
#print(beta_fd_c)
###Output
Running 10 simulations with c=0.01, n=10
Running 10 simulations with c=0.01, n=46
Running 10 simulations with c=0.01, n=215
Running 10 simulations with c=0.01, n=1000
Running 10 simulations with c=0.25, n=10
Running 10 simulations with c=0.25, n=46
Running 10 simulations with c=0.25, n=215
Running 10 simulations with c=0.25, n=1000
Running 10 simulations with c=0.5, n=10
Running 10 simulations with c=0.5, n=46
Running 10 simulations with c=0.5, n=215
Running 10 simulations with c=0.5, n=1000
Running 10 simulations with c=0.75, n=10
Running 10 simulations with c=0.75, n=46
Running 10 simulations with c=0.75, n=215
Running 10 simulations with c=0.75, n=1000
Running 10 simulations with c=0.99, n=10
Running 10 simulations with c=0.99, n=46
Running 10 simulations with c=0.99, n=215
Running 10 simulations with c=0.99, n=1000
###Markdown
Use seaborn for plotting
###Code
fig,axes = plt.subplots(2,2,figsize=(8,8), sharex = True)
pal = sns.color_palette()
for i in range(len(cvals)):
sns.tsplot(data = mean_beta_aln_c[i,:,:].T, ax = axes[1,0], ci='sd', time=Nvals, color = pal[i])
sns.tsplot(data = mean_beta_aln_fd_c[i,:,:].T, ax = axes[1,1], ci='sd', time=Nvals, color = pal[i])
sns.tsplot(data = mean_beta_c[i,:,:].T, ax = axes[0,0], ci='sd', time=Nvals, color = pal[i])
sns.tsplot(data = mean_beta_fd_c[i,:,:].T, ax = axes[0,1], ci='sd', time=Nvals, color = pal[i])
axes[1,0].set_xlabel('network size $n$');
axes[1,1].set_xlabel('network size $n$');
axes[0,0].set_ylabel('MSE');
axes[1,0].set_ylabel('alignment (degrees)');
axes[0,0].set_title('spiking discontinuity');
axes[0,1].set_title('observed dependence');
axes[0,0].set_xscale('log')
axes[0,1].set_xscale('log')
axes[0,0].set_ylim([1e-5, 1e6])
axes[0,1].set_ylim([1e-5, 1e6])
axes[1,0].set_xscale('log')
axes[1,1].set_xscale('log')
axes[0,0].set_yscale('log')
axes[0,1].set_yscale('log')
sns.despine(trim=True)
axes[0,0].legend(["c = %.2f"%i for i in cvals], loc= 'upper left');
plt.savefig('./fig_7_networksize.pdf')
###Output
_____no_output_____
###Markdown
Figure 7. Comparing RDD for different network sizes Imports
###Code
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6)
#%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as rand
import pandas as pd
import seaborn as sns
from lib.lif import LIF, ParamsLIF
from lib.causal import causaleffect_maxv, causaleffect_maxv_linear, causaleffect_maxv_sp
#Angle between two vectors
def alignment(a,b):
da = np.dot(a,a)
db = np.dot(b,b)
if da > 0 and db > 0:
return 180/np.pi*np.arccos(np.dot(a,b)/np.sqrt(da*db))
else:
return 360.
def mse(pred,true):
return np.mean((pred - true)**2)
###Output
_____no_output_____
###Markdown
A. Dependence on $N$ and $c$
###Code
nsims = 5
cvals = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
#cvals = np.array([0.01, 0.25, 0.5])
#Nvals = np.logspace(1, 3, 6, dtype = int)
Nvals = np.logspace(1, 3, 4, dtype = int)
tau_s = 0.020
dt = 0.001
t = 100
sigma = 10
x = 0
p = 0.1
DeltaT = 20
W = np.array([12, 9])
params = ParamsLIF(sigma = sigma)
lif = LIF(params, t = t)
lif.W = W
t_filter = np.linspace(0, 1, 2000)
exp_filter = np.exp(-t_filter/tau_s)
exp_filter = exp_filter/np.sum(exp_filter)
ds = exp_filter[0]
#c (correlation between noise inputs)
beta_mse_rd_c = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_fd_c = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_bp_c = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_rd_c_linear = np.zeros((len(cvals), len(Nvals), nsims))
beta_mse_fd_c_linear = np.zeros((len(cvals), len(Nvals), nsims))
#beta_sp_c = np.zeros((len(cvals), params.n))
target = 0.1
W = 10*np.ones(int(Nvals[-1]))
#W = np.random.randn(int(Nvals[-1]))*5
V = np.random.randn(int(Nvals[-1]))*5
#cost = lambda s,a: (np.dot(a[0:len(s)], s) - len(s)*target)**2
cost = lambda s,a: np.sqrt((np.dot(a[0:len(s)], s) - target)**2)
#Cost function
#B1 = 1
#B2 = 2
#x = .01
#y = 0.1
#z = 0
#cost = lambda s1, s2: (B1*s1-x)**2 + (z+B2*s2 - B2*(B1*s1-y)**2)**2
params.c = 0.99
params.n = 10
lif.setup(params)
lif.W = W[0:10]
(v, h, _, _, u) = lif.simulate(DeltaT)
h.shape
n_units = 10
s = np.zeros(h.shape)
for l in range(10):
s[l,:] = np.convolve(h[l,:], exp_filter)[0:h.shape[1]]
cost_s = cost(s,V[0:n_units])
plot(cost_s)
#Get 'true' causal effects by estimating with unconfounded data
DeltaT = 20
nsims = 10
i = 0; c = 0.0
beta_true_c = np.zeros((len(Nvals), nsims, np.max(Nvals)))
for j, n_units in enumerate(Nvals):
#for j, n_units in enumerate(Nvals[0:1]):
print("Running %d simulations with c=%s, n=%d"%(nsims, c, n_units))
params.c = c
params.n = n_units
lif = LIF(params, t = t)
lif.W = W[0:n_units]
for k in range(nsims):
(v, h, _, _, u) = lif.simulate(DeltaT)
s = np.zeros(h.shape)
for l in range(n_units):
s[l,:] = np.convolve(h[l,:], exp_filter)[0:h.shape[1]]
#cost_s = cost(s,V[0:n_units])
cost_s = cost(s,V[0:n_units])
beta_true_c[j,k,0:n_units] = causaleffect_maxv(u, cost_s, DeltaT, 1, params)
#print(beta_fd_c)
mean_beta_true_c = np.mean(beta_true_c, axis = 1)
mean_beta_true_c.shape
def tsplot(ax, data, xvals = None, **kw):
if xvals is not None:
x = xvals
else:
x = np.arange(data.shape[1])
n = data.shape[0]
est = np.mean(data, axis=0)
sd = np.std(data, axis=0)/np.sqrt(n)
cis = (est - sd, est + sd)
ax.fill_between(x,cis[0],cis[1],alpha=0.2, **kw)
ax.plot(x,est,**kw)
ax.margins(x=0)
plt.imshow(s[:,0:1000])
plt.colorbar()
#Compute causal effects for different c values
DeltaT = 20
nsims = 10
mean_beta_aln_c = np.zeros((len(cvals), len(Nvals), nsims))
mean_beta_aln_fd_c = np.zeros((len(cvals), len(Nvals), nsims))
mean_beta_c = np.zeros((len(cvals), len(Nvals), nsims))
mean_beta_fd_c = np.zeros((len(cvals), len(Nvals), nsims))
for i,c in enumerate(cvals):
for j, n_units in enumerate(Nvals):
#for j, n_units in enumerate(Nvals[0:1]):
print("Running %d simulations with c=%s, n=%d"%(nsims, c, n_units))
params.c = c
params.n = n_units
lif = LIF(params, t = t)
lif.W = W[0:n_units]
for k in range(nsims):
(v, h, _, _, u) = lif.simulate(DeltaT)
s = np.zeros(h.shape)
for l in range(n_units):
s[l,:] = np.convolve(h[l,:], exp_filter)[0:h.shape[1]]
cost_s = cost(s,V[0:n_units])
#cost_s = cost(s,V[0:n_units]/np.sqrt(n_units))
beta_est_c = causaleffect_maxv(u, cost_s, DeltaT, p, params)
beta_est_fd_c = causaleffect_maxv(u, cost_s, DeltaT, 1, params)
mean_beta_aln_c[i,j,k] = alignment(beta_est_c, mean_beta_true_c[j,0:n_units])
mean_beta_aln_fd_c[i,j,k] = alignment(beta_est_fd_c, mean_beta_true_c[j,0:n_units])
mean_beta_c[i,j,k] = mse(beta_est_c, mean_beta_true_c[j,0:n_units])
mean_beta_fd_c[i,j,k] = mse(beta_est_fd_c, mean_beta_true_c[j,0:n_units])
#print(beta_fd_c)
###Output
Running 10 simulations with c=0.01, n=10
Running 10 simulations with c=0.01, n=46
Running 10 simulations with c=0.01, n=215
Running 10 simulations with c=0.01, n=1000
Running 10 simulations with c=0.25, n=10
Running 10 simulations with c=0.25, n=46
Running 10 simulations with c=0.25, n=215
Running 10 simulations with c=0.25, n=1000
Running 10 simulations with c=0.5, n=10
Running 10 simulations with c=0.5, n=46
Running 10 simulations with c=0.5, n=215
Running 10 simulations with c=0.5, n=1000
Running 10 simulations with c=0.75, n=10
Running 10 simulations with c=0.75, n=46
Running 10 simulations with c=0.75, n=215
Running 10 simulations with c=0.75, n=1000
Running 10 simulations with c=0.99, n=10
Running 10 simulations with c=0.99, n=46
Running 10 simulations with c=0.99, n=215
Running 10 simulations with c=0.99, n=1000
###Markdown
Use seaborn for plotting
###Code
fig,axes = plt.subplots(2,2,figsize=(8,8), sharex = True)
pal = sns.color_palette()
for i in range(len(cvals)):
tsplot(data = mean_beta_aln_c[i,:,:].T, ax = axes[1,0], xvals=Nvals, color = pal[i])
tsplot(data = mean_beta_aln_fd_c[i,:,:].T, ax = axes[1,1], xvals=Nvals, color = pal[i])
tsplot(data = mean_beta_c[i,:,:].T, ax = axes[0,0], xvals=Nvals, color = pal[i])
tsplot(data = mean_beta_fd_c[i,:,:].T, ax = axes[0,1], xvals=Nvals, color = pal[i])
axes[1,0].set_xlabel('network size $n$');
axes[1,1].set_xlabel('network size $n$');
axes[0,0].set_ylabel('MSE');
axes[1,0].set_ylabel('alignment (degrees)');
axes[0,0].set_title('spiking discontinuity');
axes[0,1].set_title('observed dependence');
axes[0,0].set_xscale('log')
axes[0,1].set_xscale('log')
axes[0,0].set_ylim([1e-5, 1e6])
axes[0,1].set_ylim([1e-5, 1e6])
axes[1,0].set_xscale('log')
axes[1,1].set_xscale('log')
axes[0,0].set_yscale('log')
axes[0,1].set_yscale('log')
sns.despine(trim=True)
axes[0,0].legend(["c = %.2f"%i for i in cvals], loc= 'upper left');
#plt.savefig('./fig_7_networksize.pdf')
###Output
_____no_output_____ |
genre_classifer/3. Deploy Music Genre Classification.ipynb | ###Markdown
Upload singleExtract
###Code
import pandas as pd
import librosa
import numpy as np
import sklearn
# Preprocessing
from sklearn.preprocessing import MinMaxScaler
#Keras
import keras
import tensorflow as tf
from keras import models
from keras import layers
from tensorflow.keras.models import save_model, load_model
import warnings
warnings.filterwarnings('ignore')
!unzip '/content/drive/My Drive/AI Project/Music Genre Classification/Deployment/genre_classifer.zip'
import genre_classifer
exec('genre_classifer')
import singleExtract as SE
###Output
Using TensorFlow backend.
###Markdown
**Genre Classification** **Reading data and preprocessing**
###Code
# Load scaler and fit to data available (Important)
scaler = MinMaxScaler() # Use this object for scaling new data
training_features = pd.read_csv('/content/drive/My Drive/AI Project/Music Genre Classification/Deployment/final_features.csv')
data = training_features.drop(['number'], axis=1)
training_data = scaler.fit_transform(np.array(data.iloc[:, 1:], dtype = float))
# This fits the scaler to training data, and the scaler object is used on new data to scale it similarly to training data
# Read new audio file and extract feature data
features = SE.extract('/content/drive/My Drive/AI Project/Music Genre Classification/Deployment/Song files/Electronic/004520.mp3')
features
X = scaler.transform(features) # Use scaler object fit on training data
###Output
_____no_output_____
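###Markdown
A hedged side note (not part of the original notebook): instead of re-fitting the scaler from the training CSV every time the service starts, the fitted scaler object could be persisted once and reloaded at serving time. The file name below is made up for illustration.
###Code
import joblib
# joblib.dump(scaler, 'fitted_scaler.joblib')   # save once, right after fitting
# scaler = joblib.load('fitted_scaler.joblib')  # reload at serving time
###Output
_____no_output_____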
###Markdown
Predict
###Code
# load model
model = load_model('/content/drive/My Drive/AI Project/Music Genre Classification/Deployment/final_deploy_model.h5')
# Evaluate on test data
prediction = model.predict(X)
print(np.argmax(prediction)) # Final prediction, which can be forwarded to Music Recommender
###Output
0
|
Notebooks/Starting with Keras - Image Classification.ipynb | ###Markdown
This part is derived from the official TensorFlow documentation for Keras. Keras is a high-level API that aims to simplify the construction and training of neural networks. It provides an abstraction layer so that the same code can run on top of different backend frameworks (historically Theano and CNTK in addition to TensorFlow). Keras is also embedded natively into the TensorFlow ecosystem as tf.keras. We start by including the necessary utilities.
###Code
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
# Here we print the TensorFlow version
print(tf.__version__)
###Output
2.1.0
###Markdown
In this example we use the [Fashion MNIST dataset](https://www.tensorflow.org/tutorials/keras/classification). It consists of images of fashion articles from Zalando. It is already part of Keras, so we can simply load the data.
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 2us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 11s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 1us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 2s 1us/step
###Markdown
Fashion-MNIST is split into two sets: a training set and a test set for evaluation purposes. Each image in the dataset is mapped to a label that represents a class. There are 10 classes, labelled from 0 to 9. We now declare a class name for each label.
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
There are 60000 training images with a format of 28 x 28 pixels and 60000 corresponding labels.
###Code
train_images.shape
len(train_labels)
train_labels
###Output
_____no_output_____
###Markdown
We have 10000 test images.
###Code
test_images.shape
len(test_labels)
###Output
_____no_output_____
###Markdown
Process the Data The data need to be pre-processed. The color values fall in the range from 0 to 255.
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
We need to normalise the values from 0 to 1.
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
Now we verify that the data is in the correct format to ensure that we can build and train a neural network. The following code displays the first 25 elements of the training data set.
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
Build the Model A neural network is also called a model; this part builds the model. Set up the Layers A neural network is composed of a number of layers, which can be seen as building blocks. Each layer looks for a particular kind of meaningful feature in the data that is passed through it, so different layers are responsible for different features. The deeper the network (the more layers it has), the more complex the data structures it can work with. We start by declaring a Keras object and chaining some layers together in a sequence.
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
###Output
_____no_output_____
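###Markdown
As an optional hedged check (not part of the original tutorial), model.summary() lists the layers and parameter counts of the network that was just defined:
###Code
model.summary()  # this architecture has 101,770 trainable parameters
###Output
_____no_output_____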
###Markdown
The first layer is the Flatten layer. Its purpose is to transform a 2-dimensional array into a 1-dimensional array; here the 2-dimensional array is the array of pixels of an input image (28 x 28 pixels). The Flatten layer is followed by a sequence of Dense layers, which are fully connected layers. The first Dense layer has 128 neurons (or nodes) and the second has 10 neurons, which return 10 scores that indicate to which class an image belongs. Compile the Model Now a few more settings need to be chosen before training. * Loss function: measures how well the model is doing during training and should push the training in the right direction. * Optimizer: updates the neural network based on the output of the loss function. * Metrics: used to monitor the training progress of the model, e.g. the accuracy metric shows the fraction of images that are classified correctly during a training iteration.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the Model Training proceeds in four steps: the model is fed the training images and labels; the model learns to associate images with labels; the model makes predictions on the test set and the results are verified against the test labels; and this is repeated over a number of iterations. Feed the Model Training is started by calling the model.fit method. In this example we specify 10 training rounds (epochs). As you can observe, the loss decreases while the accuracy increases; in the end we reach a training accuracy of about 0.91, or 91%.
###Code
model.fit(train_images, train_labels, epochs=10)
###Output
Train on 60000 samples
Epoch 1/10
60000/60000 [==============================] - 10s 167us/sample - loss: 0.5005 - accuracy: 0.8241
Epoch 2/10
60000/60000 [==============================] - 9s 154us/sample - loss: 0.3757 - accuracy: 0.8644
Epoch 3/10
60000/60000 [==============================] - 10s 170us/sample - loss: 0.3368 - accuracy: 0.8762
Epoch 4/10
60000/60000 [==============================] - 9s 156us/sample - loss: 0.3116 - accuracy: 0.8850
Epoch 5/10
60000/60000 [==============================] - 10s 163us/sample - loss: 0.2916 - accuracy: 0.8941
Epoch 6/10
60000/60000 [==============================] - 10s 171us/sample - loss: 0.2772 - accuracy: 0.8973
Epoch 7/10
60000/60000 [==============================] - 10s 171us/sample - loss: 0.2673 - accuracy: 0.9006
Epoch 8/10
60000/60000 [==============================] - 10s 167us/sample - loss: 0.2562 - accuracy: 0.9046
Epoch 9/10
60000/60000 [==============================] - 10s 169us/sample - loss: 0.2479 - accuracy: 0.9078
Epoch 10/10
60000/60000 [==============================] - 10s 163us/sample - loss: 0.2400 - accuracy: 0.9104
###Markdown
Evaluate the Accuracy Now we evaluate the model on the test data set.
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
10000/10000 - 1s - loss: 0.3482 - accuracy: 0.8806
Test accuracy: 0.8806
###Markdown
It should turn out that the accuracy on the test set is somewhat worse than on the training set. This gap between training and test accuracy is a sign of overfitting: if a model is trained for too long, it starts to memorise details (such as noise) in the training data to the point that its performance on unseen data suffers. A minimal hedged sketch of one way to monitor this during training is shown below.
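###Code
# Hedged sketch (not part of the original tutorial): one common way to watch the
# train/validation gap is to hold out part of the training data and stop training
# early when the validation loss stops improving. EarlyStopping, validation_split
# and callbacks are standard Keras APIs; patience=2 is an arbitrary choice here.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2,
                                              restore_best_weights=True)
# model.fit(train_images, train_labels, epochs=10,
#           validation_split=0.1, callbacks=[early_stop])
###Output
_____no_output_____
###Markdown
Make Predictions The model can now be used to make predictions.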
###Code
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
###Output
_____no_output_____
###Markdown
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
The output says that the model is confident that image is an ankle boot.
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
Verify the Predictions We now plot the results to verify the predictions.
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
Here, the prediction for the first image is shown. A blue label indicates a correct prediction, a red label an incorrect one.
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Another example.
###Code
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Here are several images with their results.
###Code
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Use the trained Model The model can now be used on a single image.
###Code
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
# Predict the correct label
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
np.argmax(predictions_single[0])
###Output
_____no_output_____ |
notebooks/BenchMarkTest.ipynb | ###Markdown
###Code
!pip install transformers
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/project-code-py")
model = AutoModelWithLMHead.from_pretrained("gagan3012/project-code-py")
sequence = """Given an array of integers nums and a positive integer k"""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.encode().decode('unicode_escape'))
sequence = """Given an array of integers"""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.encode().decode('unicode_escape'))
#two sum problem
sequence = """Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target."""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.encode().decode('unicode_escape'))
sequence = """def solution():"""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.encode().decode('unicode_escape'))
sequence = """Given two strings str1 and str2, return the shortest string that has both str1 and str2 as subsequences. If multiple answers exist, you may return any of them."""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.encode().decode('unicode_escape'))
sequence = """Given a sorted array nums, remove the duplicates in-place such that each element appears only once and returns the new length."""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text.encode().decode('unicode_escape'))
###Output
_____no_output_____ |
hydradx_update/TestIL.ipynb | ###Markdown
Testing IL In this notebook we will compare the impermanent loss (IL) of the implemented AMM to theory. Simulation Setup
###Code
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import copy
import random
import math
# Experiments
from model import run
from model import processing
#from model.plot_utils import *
from model import plot_utils as pu
from model import init_utils
########## AGENT CONFIGURATION ##########
# key -> token name, value -> token amount owned by agent
# note that token name of 'omniABC' is used for omnipool LP shares of token 'ABC'
# omniHDXABC is HDX shares dedicated to pool of token ABC
LP1 = {'omniR1': 500000}
LP2 = {'omniR2': 1500000}
trader = {'HDX': 1000000, 'R1': 1000000, 'R2': 1000000}
# key -> agent_id, value -> agent dict
agent_d = {'Trader': trader, 'LP1': LP1, 'LP2': LP2}
#agent_d = {'Trader': trader, 'LP1': LP1}
########## ACTION CONFIGURATION ##########
action_dict = {
'sell_hdx_for_r1': {'token_buy': 'R1', 'token_sell': 'HDX', 'amount_sell': 2000, 'action_id': 'Trade', 'agent_id': 'Trader'},
'sell_r1_for_hdx': {'token_sell': 'R1', 'token_buy': 'HDX', 'amount_sell': 1000, 'action_id': 'Trade', 'agent_id': 'Trader'}
}
# list of (action, number of repititions of action), timesteps = sum of repititions of all actions
trade_count = 5000
action_ls = [('trade', trade_count)]
# maps action_id to action dict, with some probability to enable randomness
prob_dict = {
'trade': {'sell_hdx_for_r1': 0.5,
'sell_r1_for_hdx': 0.5}
}
########## CFMM INITIALIZATION ##########
# Todo: generalize
initial_values = {
'token_list': ['R1','R2'],
'R': [500000,1500000],
'P': [2,2/3],
'fee_assets': 0,
'fee_HDX': 0
}
#initial_values['H'] = [initial_values['Q'] * initial_values['W'][i] for i in range(len(initial_values['token_list']))]
#initial_values['D'] = copy.deepcopy(initial_values['H'])
#amms = [balancer_amm, reweighting_amm]
#amm_types = ['Balancer', 'Reweighting']
#amms = [reweighting_amm]
#amm_types = ['Reweighting']
#labels = amm_types
initial_list = []
config_params = {
#'amm': amm,
'cfmm_type': "",
'initial_values': initial_values,
'agent_d': agent_d,
'action_ls': action_ls,
'prob_dict': prob_dict,
'action_dict': action_dict,
}
config_dict, state = init_utils.get_configuration(config_params)
pd.options.mode.chained_assignment = None # default='warn'
pd.options.display.float_format = '{:.2f}'.format
run.config(config_dict, state)
events = run.run()
rdf, agent_df = processing.postprocessing(events)
%matplotlib inline
rdf.head(20) # Todo: delete
agent_df.tail(20) # TODO: delete
###Output
_____no_output_____
###Markdown
Analysis
###Code
var_list = ['R', 'Q']
pu.plot_vars(rdf, var_list)
var_list = ['r', 'q']
trader_df = agent_df[agent_df['agent_label'] == 'Trader']
pu.plot_vars(trader_df, var_list)
# merge agent_df, rdf to one df on timesteps, run, etc
merged_df = pd.merge(agent_df, rdf, how="inner", on=["timestep", "simulation", "run", "subset", "substep"])
# add IL column to agent DF, where val_hold is calculated using initial holdings from agent_d
#val hold: withdraw liquidity at t=0, calculate value with prices at t
#val pool: withdraw liquidity at t, calculate value with prices at t
merged_df['P-0'] = merged_df.apply(lambda x: x['Q-0']/x['R-0'], axis=1)
merged_df['P-1'] = merged_df.apply(lambda x: x['Q-1']/x['R-1'], axis=1)
merged_df['val_pool'] = merged_df.apply(lambda x: processing.val_pool(x), axis=1)
withdraw_agent_d = processing.get_withdraw_agent_d(initial_values, agent_d)
print(withdraw_agent_d)
merged_df['val_hold'] = merged_df.apply(lambda x: processing.val_hold(x, withdraw_agent_d), axis=1)
merged_df['IL'] = merged_df.apply(lambda x: x['val_pool']/x['val_hold'] - 1, axis=1)
merged_df['pool_val'] = merged_df.apply(lambda x: processing.pool_val(x), axis=1)
#merged_df['pool_loss'] = merged_df.apply(lambda x: x['pool_val']/2000000 - 1, axis=1)
merged_df[['timestep', 'agent_label', 'q','Q-0','B-0','s-0','S-0','r-0','R-0','val_pool', 'val_hold','IL','pool_val', 'p-0']].tail()
# compute val hold column
# compute val pool column
# compute IL
# plot Impermanent loss
#
merged_df[merged_df['agent_label'] == 'LP2'][['timestep', 'agent_label', 'q','Q-0','B-0','s-0','S-0','r-0','R-0','val_pool', 'val_hold','IL','pool_val']].head(20)
LP1_merged_df = merged_df[merged_df['agent_label'] == 'LP1']
LP1_merged_df[['timestep', 'agent_label', 'q','Q-0','B-0','s-0','S-0','r-0','R-0','val_pool', 'val_hold','IL','pool_val']].head(50)
###Output
_____no_output_____
###Markdown
IL over time
###Code
var_list = ['pool_val', 'val_pool', 'val_hold', 'IL']
LP1_merged_df = merged_df[merged_df['agent_label'] == 'LP1']
pu.plot_vars(LP1_merged_df, var_list)
###Output
[0]
###Markdown
IL as a function of price movement Theory On a price move from $p_i^Q \to k p_i^Q$, LP is entitled to $k\frac{\sqrt{k}}{k+1}$ of the *original value* of the matched pool.$$Val_{hold} = k p_i^Q R_i\\Val_{pool} = \frac{\sqrt{k}k}{k+1} 2Q_i = \left(\frac{2\sqrt{k}k}{k+1}\right) p_i^Q R_i$$ $Val_{hold}$
###Code
def val_hold_func(P, R):
return P * R
plt.figure(figsize=(15,5))
#ax = plt.subplot(131, title='P-0/IL')
#LP1_merged_df[['IL','P-0']].astype(float).plot(ax=ax, y=['IL'], x='P-0', label=[])
ax = plt.subplot(131, title='P-0/val_hold')
LP1_merged_df[['val_hold','P-0']].astype(float).plot(ax=ax, y=['val_hold'], x='P-0', label=[])
#ax = plt.subplot(132, title='Theoretical')
x = LP1_merged_df['P-0'].tolist()
y = LP1_merged_df.apply(lambda x: val_hold_func(x['P-0'], LP1['omniR1']), axis=1)
ax.plot(x,y, label='Theory')
#ax = plt.subplot(132, title='Theoretical')
#x = LP1_merged_df['P-0'].tolist()
#y = LP1_merged_df['P-0'].apply(lambda x: IL_func(x, 2, 0.5))
#ax.plot(x,y, label='Theory')
###Output
_____no_output_____
###Markdown
$Val_{Pool}$
###Code
def val_pool_func(P, P_init, R):
k = P/P_init
return 2 * k * math.sqrt(k) / (k + 1) * P_init * R
plt.figure(figsize=(15,5))
#ax = plt.subplot(131, title='P-0/IL')
#LP1_merged_df[['IL','P-0']].astype(float).plot(ax=ax, y=['IL'], x='P-0', label=[])
ax = plt.subplot(131, title='P-0/val_pool')
LP1_merged_df[['val_pool','P-0']].astype(float).plot(ax=ax, y=['val_pool'], x='P-0', label=[])
#ax = plt.subplot(132, title='Theoretical')
x = LP1_merged_df['P-0'].tolist()
y = LP1_merged_df.apply(lambda x: val_pool_func(x['P-0'], initial_values['P'][0], LP1['omniR1']), axis=1)
#y = LP1_merged_df.apply(lambda x: val_pool_func(x['P-0'], initial_values['P'][0], x['R-0']), axis=1)
ax.plot(x, y, label='Theory')
#ax = plt.subplot(132, title='Theoretical')
#x = LP1_merged_df['P-0'].tolist()
#y = LP1_merged_df['P-0'].apply(lambda x: IL_func(x, 2, 0.5))
#ax.plot(x,y, label='Theory')
###Output
_____no_output_____
###Markdown
Impermanent Loss
###Code
def IL_func(P, P_init, R):
return val_pool_func(P, P_init, R)/val_hold_func(P, R) - 1
plt.figure(figsize=(15,5))
#ax = plt.subplot(131, title='P-0/IL')
#LP1_merged_df[['IL','P-0']].astype(float).plot(ax=ax, y=['IL'], x='P-0', label=[])
ax = plt.subplot(131, title='P-0/IL')
LP1_merged_df[['IL','P-0']].astype(float).plot(ax=ax, y=['IL'], x='P-0', label=[])
#ax = plt.subplot(132, title='Theoretical')
x = LP1_merged_df['P-0'].tolist()
y = LP1_merged_df.apply(lambda x: IL_func(x['P-0'], initial_values['P'][0], LP1['omniR1']), axis=1)
#y = LP1_merged_df.apply(lambda x: val_pool_func(x['P-0'], initial_values['P'][0], x['R-0']), axis=1)
ax.plot(x, y, label='Theory')
#ax = plt.subplot(132, title='Theoretical')
#x = LP1_merged_df['P-0'].tolist()
#y = LP1_merged_df['P-0'].apply(lambda x: IL_func(x, 2, 0.5))
#ax.plot(x,y, label='Theory')
LP1_merged_df[['val_pool', 'val_hold', 'R-0', 's-0', 'S-0', 'B-0', 'P-0', 'p-0']].tail()
###Output
_____no_output_____ |
Labs/Lab7_Clustering/Lab 7 - K-means Clustering.ipynb | ###Markdown
Lab 7: K-means Clustering __IMPORTANT__ Please complete this Jupyter Notebook file and upload it to blackboard __before 15 March 2020__ evening. In this Lab, you will implement the K-means clustering algorithm and apply it to compress an image. Before starting on the Lab, we strongly recommend reading the slides of lecture 7. You will first start on an example 2D dataset that will help you gain an intuition of how the K-means algorithm works. After that, you will use the K-means algorithm for image compression by reducing the number of colors that occur in an image to only those that are most common in that image. 1. Implementing K-means The K-means algorithm is a method to automatically cluster similar data examples together. Concretely, you are given a training set $\{ x^{(1)}, \dots, x^{(n)} \}$ (where $x^{(i)} \in \mathbb{R}^d$), and want to group the data into a few cohesive "*clusters*". The intuition behind K-means is an iterative procedure that starts by guessing the initial centroids, and then refines this guess by repeatedly assigning examples to their closest centroids and then recomputing the centroids based on the assignments. The K-means algorithm is as follows:
```python
# Initialize centroids
centroids = kMeansInitCentroids(X, K)
for itr in range(0, iterations):
    # "Cluster assignment" step: Assign each data point to the closest centroid.
    # idx[i] corresponds to the index of the centroid assigned to data-point i
    idx = findClosestCentroids(X, centroids)
    # "Move centroid" step: Compute means based on centroid assignments
    centroids = computeCentroids(X, idx, K)
```
The inner loop of the algorithm repeatedly carries out two steps: (i) assigning each training example $x^{(i)}$ to its closest centroid, and (ii) recomputing the mean of each centroid using the points assigned to it. The K-means algorithm will always converge to some final set of means for the centroids. Note that the converged solution may not always be ideal and depends on the initial setting of the centroids. Therefore, in practice the K-means algorithm is usually run a few times with different random initializations. You will implement the two phases of the K-means algorithm separately in the next sections. 1.1. Finding closest centroids In the "cluster assignment" phase of the K-means algorithm, the algorithm assigns every training example $x^{(i)}$ to its closest centroid, given the current positions of the centroids. Specifically, for every example $i$ we set $$c^{(i)} = j \text{ that minimizes } \left \| x^{(i)} - \mu_j \right \|^2$$ where $c^{(i)}$ is the index of the centroid that is closest to $x^{(i)}$, and $\mu_j \in \mathbb{R}^d$ is the position (vector of values) of the $j^{th}$ centroid. Note that $c^{(i)}$ corresponds to `idx[i]` in the algorithm shown above. Your task is to complete the function `findClosestCentroids(X, centroids)` in the following Python code. This function takes the data matrix $X$ and the centroids inside `centroids` and should output a one-dimensional array `idx` that holds the index (a value in $\{0, \dots, K-1\}$ where $K$ is the total number of centroids) of the closest centroid to every training example. The length of the array `idx` should be the same as the number of data-points (i.e. `len(idx) == len(X) == n`). You can implement this using a loop over every training example and every centroid. Once you have completed the function `findClosestCentroids(..)`, you can test it using the examples `X` and `centroids` provided in the code below.
If you implemented the function correctly, you should get the array `[1 2 0 0]` (i.e. we have $K=3$ centroids; the data-point `X[0]` is assigned to cluster centroid `1`, the data-point `X[1]` is assigned to cluster centroid `2`, the data-point `X[2]` is assigned to cluster centroid `0`, and the data-point `X[3]` is assigned to cluster centroid `0`).
###Code
import numpy as np
""" TODO:
Complete the definition of the function findClosestCentroids(X, centroids). This
function takes the data matrix X and the centroids, and should output an array
idx that holds the index of the closest centroid to every training data-point.
"""
def findClosestCentroids(X, centroids):
# The idx list will contain the index of the closest centroid to each data-point
idx = []
# For each data-point xi from our dataset X
for xi in X:
# TODO: compute the Euclidean distance from x to all centoids. The results should be in a list distances
# distances = [...]
distances = [np.linalg.norm(xi - x) for x in centroids]
# TODO: find the index j corresponding to the smallest distance in distances (you can use np.argmin(..))
j = np.argmin(distances)
# TODO: append the index of the closest centroid from xi, to the list idx
idx.append(j)
# Return the list idx as an array
return np.array(idx)
""" TODO:
Test your function findClosestCentroids(..) by calling it using
the examples X and centroids given below. If you implemented the
function correctly, you should get [1, 2, 0, 0]
"""
X = np.array([[1, 2], [3, 4], [5, 6], [9, 11]]) # Example dataset with 4 data-points
centroids = np.array([[7, 5], [0, 2], [3, 3]]) # Initial centroids (we have K=3 centroids)
idx = findClosestCentroids(X, centroids)
print(idx)
###Output
[1 2 0 0]
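###Markdown
As a hedged aside (not required for the lab), the same assignment step can be written without the explicit Python loops by broadcasting the pairwise differences; the sketch below assumes X and centroids are NumPy arrays as above.
###Code
def findClosestCentroids_vectorized(X, centroids):
    # distances[i, j] = ||X[i] - centroids[j]||, computed via broadcasting
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(distances, axis=1)
# findClosestCentroids_vectorized(X, centroids)  # -> array([1, 2, 0, 0])
###Output
_____no_output_____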
###Markdown
1.2. Computing centroid means Given assignments of every point to a centroid, the second phase of the algorithm recomputes, for each centroid, the mean of the points that were assigned to it. Specifically, for every centroid $j$ we set it to: $$\mu_j = \frac{1}{|C_j|} \sum_{i \in C_j} x^{(i)}$$ where $C_j$ is the set of examples that are assigned to centroid $j$ (i.e. the $j^{th}$ cluster). Concretely, if two examples, say $x^{(3)}$ and $x^{(5)}$, are assigned to centroid $j = 2$, then you should update this centroid as $\mu_2 = \frac{1}{2} (x^{(3)} + x^{(5)})$. You should now complete the function `computeCentroids(X, idx, K)` in the following code. You can implement this function using a loop over the centroids. You can also use a loop over the examples; but if you can use a vectorized implementation that does not use such a loop, your code may run faster. Once you have completed the function `computeCentroids(X, idx, K)`, you can test it by calling it once with `K = 3` centroids on the previous example dataset `X` with `idx = np.array([1, 2, 0, 0])`. If your implementation is correct, the function should return the following 3 centroids as a result:
```python
[[ 7.   8.5]
 [ 1.   2. ]
 [ 3.   4. ]]
```
###Code
""" TODO:
Complete the definition of the function computeCentroids(X, idx, K). This function
takes as arguments the dataset X, the array of assignments idx (that indicates for
each data-point, the index of its nearest centroid), and the number of centroids K.
It should return a new array of centroids.
"""
def computeCentroids(X, idx, K):
new_centroids = [] # This will contain the new re-computed centroids
# For each centroid (or cluster) index j
for j in range(K):
# TODO: find Cj, the array of all data-points that were assigned to centroid j
cj = X[idx == j]
# TODO: re-compute the new centroid j as the mean (center) of the data-points in Cj
mu_j = (1/len(cj))*np.sum(cj,axis=0)
# TODO: append your re-computed centroid mu_j to the list new_centroids
new_centroids.append(mu_j)
# Return new_centroids as an array
return np.array(new_centroids)
""" TODO:
Test your function computeCentroids(X, idx, K) by calling it with K = 3 on
the previous example dataset X with idx = np.array([1, 2, 0, 0])
"""
idx = np.array([1, 2, 0, 0])
centroids = computeCentroids(X, idx, K = 3)
print(centroids)
###Output
[[7. 8.5]
[1. 2. ]
[3. 4. ]]
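###Markdown
As a hedged aside (not required for the lab), the centroid update can also be written without the Python loop over clusters, using np.bincount and np.add.at; this sketch assumes every cluster has at least one assigned point.
###Code
def computeCentroids_vectorized(X, idx, K):
    counts = np.bincount(idx, minlength=K).reshape(-1, 1)  # points per cluster
    sums = np.zeros((K, X.shape[1]))
    np.add.at(sums, idx, X)                                # unbuffered scatter-add of rows
    return sums / counts                                   # would fail for an empty cluster
# computeCentroids_vectorized(X, np.array([1, 2, 0, 0]), K=3)  # same result as above
###Output
_____no_output_____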
###Markdown
2. K-means on an example dataset After you have completed the two functions (`findClosestCentroids(..)` and `computeCentroids(..)`) successfully, the next step is to use them in the main K-means algorithm on a toy 2-dimensional dataset to help you understand how K-means works. Run the following code to load the dataset and plot it.
###Code
%matplotlib inline
import matplotlib.pylab as plt
from scipy.io import loadmat
mat = loadmat("datasets/lab7data1.mat")
X = mat["X"]
np.random.shuffle(X)
print("X.shape:", X.shape)
fig, ax = plt.subplots()
ax.scatter(X[:, 0], X[:, 1], marker=".", color="blue")
fig.show()
###Output
X.shape: (300, 2)
###Markdown
In the following Python code a function `Kmeans(X, K, max_iterations)` is defined. This function performs K-means clustering by calling the two functions that you implemented (`findClosestCentroids(..)` and `computeCentroids(..)`) inside a loop. It returns the final cluster centroids. Read the `Kmeans(X, K, max_iterations)` function to understand it, then call it with `K = 3` and `max_iterations = 50`, on the dataset `X` that we loaded previously. Once K-means finishes running and returns the final centroids, your task is to produce a plot of the dataset with colors corresponding to the clusters that K-means found. Your plot should be similar to the following figure. **Hint:** After calling `Kmeans(..)` and getting the final centroids, you can get the cluster index to which each data-point belongs, by calling `idx = findClosestCentroids(X, centroids)` once again. Then, in order, for example, to select the data-points that are members of cluster 0, you can use `X[idx == 0]`.
###Code
import numpy as np
# This function performs K-means clustering and returns the final cluster centroids
def Kmeans(X, K, max_iterations):
# Initialize centroids: we pick randomly K different points as our initial centroids
random_ids = np.random.choice(len(X), K, replace=False) # pick K random ids from range(len(X))
centroids = X[random_ids]
for itr in range(1, max_iterations):
idx = findClosestCentroids(X, centroids) # Assigning data-points to clusters
centroids = computeCentroids(X, idx, K) # Updating (re-computing) the centroids
return centroids
""" TODO:
Call the Kmeans(X, K, max_iterations) function with K=3 and produce a plot similar
to the above figure. Data-points within the same cluster should have the same color.
"""
K = 3 # Number of clusters (and centroids) that we want to get
max_iterations = 50 # Number of iterations to perform
centroids = Kmeans(X, K, max_iterations)
#TODO: continue here to produce the required plot
idx = findClosestCentroids(X, centroids)       # cluster membership of every data-point
for j, color in zip(range(K), ["blue", "green", "orange"]):
    cluster = X[idx == j]
    plt.scatter(cluster[:, 0], cluster[:, 1], marker=".", color=color)
plt.scatter(centroids[:, 0], centroids[:, 1], marker="x", color="red")  # centroids
print(centroids)
###Output
[[6.03366736 3.00052511]
[3.04367119 1.01541041]
[1.95399466 5.02557006]]
###Markdown
3. Image compression with K-means In this section, you will apply K-means to image compression. In a straightforward 24-bit color representation of an image, each pixel is represented as three 8-bit unsigned integers (ranging from 0 to 255) that specify the red, green and blue intensity values. This encoding is often referred to as the RGB encoding. Our image contains thousands of colors, and in this part of the Lab, you will reduce the number of colors to 16 colors. By making this reduction, it is possible to represent (compress) the photo in an efficient way. Specifically, you only need to store the RGB values of the 16 selected colors, and for each pixel in the image you now need to only store the index of the color at that location (where only 4 bits are necessary to represent 16 possibilities). In this section, you will use the K-means algorithm to select the 16 colors that will be used to represent the compressed image. Concretely, you will treat every pixel in the original image as a data example and use the K-means algorithm to find the 16 colors that best group (cluster) the pixels in the 3-dimensional RGB space. Once you have computed the cluster centroids on the image, you will then use the 16 colors to replace the pixels in the original image. 3.1. K-means on pixels In Python, images can be read as follows `A = plt.imread('image.png')`. For RGB images, this function creates a three-dimensional matrix $A$ of shape `(p, q, 3)` where $p \times q$ is the number of pixels in the image, and each element `A[i][j]` (corresponding to the pixel at row $i$ and column $j$) is a 3-dimensional vector containing the RGB (*red*, *green*, *blue*) intensities. We consider each pixel as a 3-dimensional data-point. Therefore, the number of data-points we have is the number of pixels in the image (i.e. $n = p \times q$). For example, for an RGB image of $128 \times 128 = 16384 = n$ pixels, our dataset is $X \in \mathbb{R}^{16384 \times 3}$. The following Python code first loads the image, then reshapes it to create an $n \times 3$ matrix of pixels (where $n = 16384 = 128 \times 128$), and calls the `Kmeans(..)` function on it to cluster the pixel colors into 16 clusters. This clustering process may take some time (a few seconds or minutes) as we have 16384 data-points (pixels). After finding the top K = 16 colors to represent the image, we can now assign each pixel to its closest centroid using the `findClosestCentroids(..)` function. This allows us to represent the original image using the centroid assignments of each pixel and plot the new image. The image that you will get is shown in the following figure. Notice that you have significantly reduced the number of bits that are required to describe the image. The original image required 24 bits for each one of the $128 \times 128$ pixel locations, while the new representation requires only 4 bits per pixel location. The final number of bits used would correspond to compressing the original image by about a factor of 6 (i.e. we eliminated $\sim 83\%$ of the original size). **Your Task:** Once you read the code and run it and see the results, you can then try to run it with a smaller value of $K$ (e.g. $K = 10, K = 5, K = 2$) and see the results again.
###Code
import matplotlib.pylab as plt
# Loading the image bird_small.png into a p*q*d matrix A (here p=q=128, and d=3)
A = plt.imread('datasets/bird_small.png')
print("A.shape:", A.shape)
# Reshape A into an n*d matrix X (our dataset of n pixels/data-points)
X = A.reshape(A.shape[0] * A.shape[1], A.shape[2])
print("X.shape:", X.shape)
for k in [2,5,10]:
# Apply K-means to X to cluster the pixels into 16 clusters based on their RGB intensities.
print("Performing Kmeans ... This may take some time ...")
centroids = Kmeans(X, K=k, max_iterations=20)
# For each pixel (X[i]) in X, we find its closest centroid (idx[i])
idx = findClosestCentroids(X, centroids)
# Create a new matrix XX where each data-point is replaced by its closest centroid
XX = np.array([ centroids[i] for i in idx ])
# Reshape XX back into an image (matrix AA of dimension p*q*d)
AA = XX.reshape(128, 128, 3)
# Plot the original image A and the new image AA
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(A)
ax1.axis("off")
ax1.set_title("Original image")
ax2.imshow(AA)
ax2.axis("off")
ax2.set_title("Compressed image")
fig.show()
###Output
A.shape: (128, 128, 3)
X.shape: (16384, 3)
Performing Kmeans ... This may take some time ...
###Markdown
4. Optional: Use your own imageThis section is optional. In this section, you can reuse the code we have supplied above to run on one of your own images. Note that if your image is very large, then K-means can take a long time to run. Therefore, we recommend that you resize your images to managable sizes (e.g. $128 \times 128$ pixels) before running the code. You can also try to vary K to see the effects on the compression.
###Code
# ...
# ...
# ...
# ...
###Output
_____no_output_____ |
notebooks/python_recap/python_rehearsal.ipynb | ###Markdown
Python rehearsal> *DS Data manipulation, analysis and visualization in Python* > *May/June, 2021*>> *© 2021, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- I measure air pressure
###Code
pressure_hPa = 1010 # hPa
###Output
_____no_output_____
###Markdown
REMEMBER: Use meaningful variable names I'm measuring at sea level, what would be the air pressure of this measured value on other altitudes? I'm curious what the equivalent pressure would be on other altitudes... The **barometric formula**, sometimes called the exponential atmosphere or isothermal atmosphere, is a formula used to model how the **pressure** (or density) of the air **changes with altitude**. The pressure drops approximately by 11.3 Pa per meter in the first 1000 meters above sea level.$$P=P_0 \cdot \exp \left[\frac{-g \cdot M \cdot h}{R \cdot T}\right]$$see https://www.math24.net/barometric-formula/ or https://en.wikipedia.org/wiki/Atmospheric_pressure where:* $T$ = standard temperature, 288.15 (K)* $R$ = universal gas constant, 8.3144598, (J/mol/K)* $g$ = gravitational acceleration, 9.81 (m/s$^2$)* $M$ = molar mass of Earth's air, 0.02896 (kg/mol)and:* $P_0$ = sea level pressure (hPa)* $h$ = height above sea level (m) Let's implement this... To calculate the formula, I need the exponential operator. Pure Python provides a number of mathematical functions, e.g. https://docs.python.org/3.7/library/math.html#math.exp within the `math` library
###Code
import math
# ...modules and libraries...
###Output
_____no_output_____
###Markdown
DON'T: from os import *. Just don't!
###Code
standard_temperature = 288.15
gas_constant = 8.31446
gravit_acc = 9.81
molar_mass_earth = 0.02896
###Output
_____no_output_____
###Markdown
EXERCISE: Calculate the equivalent air pressure at the altitude of 2500 m above sea level for our measured value of pressure_hPa (1010 hPa)
###Code
height = 2500
pressure_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
# ...function/definition for barometric_formula...
def barometric_formula(pressure_sea_level, height=2500):
"""Apply barometric formula
Apply the barometric formula to calculate the air pressure on a given height
Parameters
----------
pressure_sea_level : float
pressure, measured as sea level
height : float
height above sea level (m)
Notes
------
see https://www.math24.net/barometric-formula/ or
https://en.wikipedia.org/wiki/Atmospheric_pressure
"""
standard_temperature = 288.15
gas_constant = 8.3144598
gravit_acc = 9.81
molar_mass_earth = 0.02896
pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
return pressure_altitude
barometric_formula(pressure_hPa, 2000)
barometric_formula(pressure_hPa)
# ...formula not valid above 11000m...
# barometric_formula(pressure_hPa, 12000)
def barometric_formula(pressure_sea_level, height=2500):
"""Apply barometric formula
Apply the barometric formula to calculate the air pressure on a given height
Parameters
----------
pressure_sea_level : float
pressure, measured as sea level
height : float
height above sea level (m)
Notes
------
see https://www.math24.net/barometric-formula/ or
https://en.wikipedia.org/wiki/Atmospheric_pressure
"""
if height > 11000:
raise Exception("Barometric formula only valid for heights lower than 11000m above sea level")
standard_temperature = 288.15
gas_constant = 8.3144598
gravit_acc = 9.81
molar_mass_earth = 0.02896
pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
return pressure_altitude
# ...combining logical statements...
height > 11000 or pressure_hPa < 9000
# ...load function from file...
###Output
_____no_output_____
###Markdown
Instead of having the functions in a notebook, importing the function from a file can be done as importing a function from an installed package. Save the function `barometric_formula` in a file called `barometric_formula.py` and add the required import statement `import math` on top of the file. Next, run the following cell:
###Code
from barometric_formula import barometric_formula
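# For reference, a sketch of what barometric_formula.py is expected to contain
# (same function body as defined above):
#
#   import math
#
#   def barometric_formula(pressure_sea_level, height=2500):
#       """Apply barometric formula"""
#       ...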
###Output
_____no_output_____
###Markdown
REMEMBER: Write functions to prevent copy-pasting of code and maximize reuse Add documentation to functions for your future self Named arguments provide default values Import functions from a file just as other modules I measure air pressure multiple times We can store these in a Python [list](https://docs.python.org/3/tutorial/introduction.htmllists):
###Code
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
# ...check methods of lists... append vs insert
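# e.g. (illustrative): append adds an element at the end, insert adds it at a given index
example = [1013, 1003]
example.append(989)      # -> [1013, 1003, 989]
example.insert(0, 1020)  # -> [1020, 1013, 1003, 989]
example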
###Output
_____no_output_____
###Markdown
Notice: A list is a general container, so can exist of mixed data types as well.
###Code
# ...list is a container...
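# e.g. (illustrative): a list can mix data types (numbers, strings, booleans, even other lists)
mixed_list = [1013, "hPa", True, [1, 2, 3]]
mixed_list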
###Output
_____no_output_____
###Markdown
I want to apply my function to each of these measurements I want to calculate the barometric formula **for** each of these measured values.
###Code
# ...for loop... dummy example
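# e.g. (illustrative) dummy loop: repeat an operation for each element of a list
for value in [1, 2, 3]:
    print(value * 2)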
###Output
_____no_output_____
###Markdown
EXERCISE: Write a for loop that prints the adjusted value for altitude 3000m for each of the pressures in pressures_hPa
###Code
for pressure in pressures_hPa:
print(barometric_formula(pressure, 3000))
# ...list comprehensions...
###Output
_____no_output_____
###Markdown
EXERCISE: Write a for loop as a list comprehension to calculate the adjusted value for altitude 3000m for each of the pressures in pressures_hPa and store these values in a new variable pressures_hPa_adjusted
###Code
pressures_hPa_adjusted = [barometric_formula(pressure, 3000) for pressure in pressures_hPa]
pressures_hPa_adjusted
###Output
_____no_output_____
###Markdown
The power of numpy
###Code
import numpy as np
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
np_pressures_hPa = np.array([1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001])
# ...slicing/subselecting is similar...
print(np_pressures_hPa[0], pressures_hPa[0])
###Output
_____no_output_____
###Markdown
REMEMBER: [] for accessing elements [start:end:step]
###Code
# ...original function using numpy array instead of list... do both
np_pressures_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
###Output
_____no_output_____
###Markdown
REMEMBER: The operations do work on all elements of the array at the same time, you don't need a `for` loop It is also a matter of **calculation speed**:
###Code
lots_of_pressures = np.random.uniform(990, 1040, 1000)
%timeit [barometric_formula(pressure, 3000) for pressure in list(lots_of_pressures)]
%timeit lots_of_pressures * np.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
###Output
_____no_output_____
###Markdown
REMEMBER: for calculations, numpy outperforms python Boolean indexing and filtering (!)
###Code
np_pressures_hPa
np_pressures_hPa > 1000
###Output
_____no_output_____
###Markdown
You can use this as a filter to select elements of an array:
###Code
boolean_mask = np_pressures_hPa > 1000
np_pressures_hPa[boolean_mask]
###Output
_____no_output_____
###Markdown
or, also to change the values in the array corresponding to these conditions:
###Code
boolean_mask = np_pressures_hPa < 900
np_pressures_hPa[boolean_mask] = 900
np_pressures_hPa
###Output
_____no_output_____
###Markdown
**Intermezzo:** Exercises boolean indexing:
###Code
AR = np.random.randint(0, 20, 15)
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Count the number of values in AR that are larger than 10 _Tip:_ You can count with True = 1 and False = 0 and take the sum of these values
###Code
sum(AR > 10)
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even numbers of `AR` into zero-values.
###Code
AR[AR%2 == 0] = 0
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even positions of matrix AR into the value 30
###Code
AR[1::2] = 30
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Select all values above the 75th percentile of the following array AR2 and take the square root of these values _Tip_: numpy provides a function `percentile` to calculate a given percentile
###Code
AR2 = np.random.random(10)
AR2
np.sqrt(AR2[AR2 > np.percentile(AR2, 75)])
###Output
_____no_output_____
###Markdown
EXERCISE: Convert all values -99 of the array AR3 into Nan-values _Tip_: Nan values can be provided in float arrays as `np.nan`, and numpy provides a specialized function to compare float values, i.e. `np.isclose()`
###Code
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
AR3[np.isclose(AR3, -99)] = np.nan
AR3
###Output
_____no_output_____
###Markdown
I also have measurement locations
###Code
location = 'Ghent - Sterre'
# ...check methods of strings... split, upper,...
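# e.g. (illustrative) string methods:
location.split(" - ")            # ['Ghent', 'Sterre']
location.upper()                 # 'GHENT - STERRE'
location.replace("Sterre", "Coupure")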
locations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn',
'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',
'Antwerp - Groenplaats', 'Brussels- Grand place',
'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']
###Output
_____no_output_____
###Markdown
EXERCISE: Use a list comprehension to convert all locations to lower case. _Tip:_ check the available methods of lists by writing: `location.` + TAB button
###Code
[location.lower() for location in locations]
###Output
_____no_output_____
###Markdown
I also measure temperature
###Code
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
temperature_degree = [23, 20, 17, 8, 12, 5, 16, 22, -2, 16]
locations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn',
'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',
'Antwerp - Groenplaats', 'Brussels- Grand place',
'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']
###Output
_____no_output_____
###Markdown
Python [dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) are a convenient way to store multiple types of data together, so that you do not need too many different variables:
###Code
measurement = {}
measurement['pressure_hPa'] = 1010
measurement['temperature'] = 23
measurement
# ...select on name, iterate over keys or items...
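# e.g. (illustrative): select a value by key and iterate over key/value pairs
measurement['pressure_hPa']
for key, value in measurement.items():
    print(key, value)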
measurements = {'pressure_hPa': pressures_hPa,
'temperature_degree': temperature_degree,
'location': locations}
measurements
###Output
_____no_output_____
###Markdown
__But__: I want to apply my barometric function to measurements taken in Ghent when the temperature was below 10 degrees...
###Code
for idx, pressure in enumerate(measurements['pressure_hPa']):
if measurements['location'][idx].startswith("Ghent") and \
measurements['temperature_degree'][idx]< 10:
print(barometric_formula(pressure, 3000))
###Output
_____no_output_____
###Markdown
when a table would be more appropriate... Pandas!
###Code
import pandas as pd
measurements = pd.DataFrame(measurements)
measurements
barometric_formula(measurements[(measurements["location"].str.contains("Ghent")) &
(measurements["temperature_degree"] < 10)]["pressure_hPa"], 3000)
###Output
_____no_output_____
###Markdown
Python the basics: A quick recap> *DS Data manipulation, analysis and visualisation in Python* > *December, 2017*> *© 2016, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)* First steps the obligatory...
###Code
print("Hello DS_course!") # python 3(!)
###Output
_____no_output_____
###Markdown
Python is a calculator
###Code
4*5
3**2
(3 + 4)/2, 3 + 4/2,
21//5, 21%5 # floor division, modulo
###Output
_____no_output_____
###Markdown
Variable assignment
###Code
my_variable_name = 'DS_course'
my_variable_name
name, age = 'John', 30
print('The age of {} is {:d}'.format(name, age))
###Output
_____no_output_____
###Markdown
More information on print format: https://pyformat.info/ Loading functionalities
###Code
import os
os.listdir()
###Output
_____no_output_____
###Markdown
Loading with defined short name (community agreement)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Loading functions from any file/module/package:
###Code
%%file rehears1.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
"""Dummy function to print the string it"""
print('it')
import rehears1
rehears1.print_it()
%%file rehears2.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
"""Dummy function to print the string it"""
print('it')
def print_custom(my_input):
"""Dummy function to print the string that"""
print(my_input)
from rehears2 import print_it, print_custom
print_custom('DS_course')
###Output
_____no_output_____
###Markdown
DON'T: `from os import *`. Just don't! Datatypes Numerical **floats**
###Code
a_float = 5.
type(a_float)
###Output
_____no_output_____
###Markdown
**integers**
###Code
an_integer = 4
type(an_integer)
###Output
_____no_output_____
###Markdown
**booleans**
###Code
a_boolean = True
a_boolean
type(a_boolean)
3 > 4 # results in boolean
###Output
_____no_output_____
###Markdown
Containers Strings
###Code
a_string = "abcde"
a_string
a_string.capitalize(), a_string.capitalize(), a_string.endswith('f') #,...
a_string.upper().replace('B', 'A')
a_string + a_string
a_string * 5
###Output
_____no_output_____
###Markdown
Lists
###Code
a_list = [1, 'a', 3, 4]
a_list.append(8.2)
a_list
a_list.reverse()
a_list
###Output
_____no_output_____
###Markdown
REMEMBER: The list is updated in-place; a_list.reverse() does not return anything, it updates the list
###Code
a_list + ['b', 5]
[el*2 for el in a_list] # list comprehensions...a short for-loop
[el for el in dir(list) if not el[0]=='_']
###Output
_____no_output_____
###Markdown
EXERCISE: Rewrite the previous list comprehension by using a builtin string method to test if the element starts with an underscore
###Code
# %load _solutions/python_rehearsal35.py
###Output
_____no_output_____
###Markdown
EXERCISE: Given the sentence `the quick brown fox jumps over the lazy dog`, split the sentence in words and put all the word-lengths in a list.
###Code
sentence = "the quick brown fox jumps over the lazy dog"
# %load _solutions/python_rehearsal37.py
###Output
_____no_output_____
###Markdown
Dictionaries
###Code
a_dict = {'a': 1, 'b': 2}
a_dict['c'] = 3
a_dict['a'] = 5
a_dict
a_dict.keys(), a_dict.values(), a_dict.items()
an_empty_dic = dict() # or just {}
an_empty_dic
###Output
_____no_output_____
###Markdown
**tuple**
###Code
a_tuple = (1, 2, 4)
###Output
_____no_output_____
###Markdown
REMEMBER: (), [], {} => depends from the datatype you want to create!
###Code
collect = a_list, a_dict
type(collect)
serie_of_numbers = 3, 4, 5
# Using tuples on the left-hand side of assignment allows you to extract fields
a, b, c = serie_of_numbers
print(c, b, a)
###Output
_____no_output_____
###Markdown
Accessing container values
###Code
a_string[2:5]
a_list[-2]
a_list = [0, 1, 2, 3]
a_list[:3]
a_list[::2]
###Output
_____no_output_____
###Markdown
EXERCISE: Reverse the `a_list` without using the built-in reverse method, but using an appropriate slicing command:
###Code
# %load _solutions/python_rehearsal54.py
a_dict['a']
a_tuple[1]
###Output
_____no_output_____
###Markdown
REMEMBER: [] for accessing elements Assigning new values to items -> `mutable` vs `immutable`
###Code
a_list
a_list[2] = 10 # element 2 changed -- mutable
a_list
## TRY these yourself by un-commenting
#a_tuple[1] = 10 # cfr. a_string -- immutable
#a_string[3] = 'q'
###Output
_____no_output_____
###Markdown
Control flows for-loop
###Code
for i in [1, 2, 3, 4]:
print(i)
for i in a_list: # anything that is a collection/container can be looped
print(i)
###Output
_____no_output_____
###Markdown
EXERCISE: Loop through the characters of the string `Hello DS` and print each character on a new line
###Code
# %load _solutions/python_rehearsal62.py
for i in a_dict: # items, keys, values
print(i)
for j, key in enumerate(a_dict.keys()):
print(j, key)
###Output
_____no_output_____
###Markdown
REMEMBER: When needing an iterator to count, just use `enumerate`. You mostly do not need the i = 0 ... i = i + 1 pattern in a for loop. Check [itertools](http://pymotw.com/2/itertools/) as well... while
###Code
b = 7
while b < 10:
b+=1
print(b)
###Output
_____no_output_____
###Markdown
if statement
###Code
if 'a' in a_dict:
print('a is in!')
if 3 > 4:
print('This is valid')
testvalue = True # 0, 1, None, False, 4 > 3
if testvalue:
print('valid')
else:
raise Exception("Not valid!")
myvalue = 3
if isinstance(myvalue, str):
print('this is a string')
elif isinstance(myvalue, float):
print('this is a float')
elif isinstance(myvalue, list):
print('this is a list')
else:
print('no idea actually')
###Output
_____no_output_____
###Markdown
Functions We've been using functions the whole time...
###Code
len(a_list)
###Output
_____no_output_____
###Markdown
Calling a method on an object:
###Code
a_list.reverse()
a_list
###Output
_____no_output_____
###Markdown
Defining a function:
###Code
def f(a, b, verbose=False):
"""custom summation function
Parameters
----------
a : number
first number to sum
b : number
second number to sum
verbose: boolean
require additional information (True) or not (False)
Returns
-------
my_sum : number
sum of the provided two input elements
"""
if verbose:
print('print a lot of information to the user')
my_sum = a + b
return my_sum
f(2, 3, verbose=False) # [3], '4'
###Output
_____no_output_____
###Markdown
REMEMBER: () for calling functions **Functions are objects** as well... (!)
###Code
def f1():
print('this is function 1 speaking...')
def f2():
print('this is function 2 speaking...')
def function_of_functions(inputfunction):
return inputfunction()
function_of_functions(f1)
###Output
_____no_output_____
###Markdown
**Anonymous functions (lambda)**
###Code
add_two = (lambda x: x + 2)
add_two(10)
###Output
_____no_output_____
###Markdown
Numpy
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Creating numpy array
###Code
np.array([1, 1.5, 2, 2.5]) #np.array(anylist)
np.arange(5, 10, 2)
np.linspace(5, 9, 3)
np.zeros((5, 2)), np.ones(5)
np.zeros((5, 2)).shape, np.zeros((5, 2)).size
###Output
_____no_output_____
###Markdown
Slicing
###Code
my_array = np.random.randint(2, 10, 10)
my_array
my_array[-2:]
my_array[0:7:2]
###Output
_____no_output_____
###Markdown
Assign new values to items
###Code
my_array[:2] = 10
my_array
my_array = my_array.reshape(5, 2)
my_array
my_array[0, :]
###Output
_____no_output_____
###Markdown
Element-wise operations
###Code
my_array = np.random.randint(2, 10, 10)
my_array%3 # == 0
np.exp(my_array), np.sin(my_array)
np.max(my_array)
np.cumsum(my_array) == my_array.cumsum()
my_array.max(axis=0)
my_array * my_array # element-wise
###Output
_____no_output_____
###Markdown
REMEMBER: The operations do work on all elements of the array at the same time, you don't need a `for` loop
###Code
a_list = range(1000)
%timeit [i**2 for i in a_list]
an_array = np.arange(1000)
%timeit an_array**2
###Output
_____no_output_____
###Markdown
Boolean indexing and filtering (!)
###Code
row_array = np.random.randint(1, 20, 10)
row_array
###Output
_____no_output_____
###Markdown
Conditions can be checked (element-wise):
###Code
row_array > 5
boolean_mask = row_array > 5
###Output
_____no_output_____
###Markdown
You can use this as a filter to select elements of an array:
###Code
row_array[boolean_mask]
###Output
_____no_output_____
###Markdown
or, also to change the values in the array corresponding to these conditions:
###Code
row_array[boolean_mask] = 20
row_array
###Output
_____no_output_____
###Markdown
in short - making the values equal to 20 now -20:
###Code
row_array[row_array == 20] = -20
row_array
###Output
_____no_output_____
###Markdown
This requires some practice...
###Code
AR = np.random.randint(0, 20, 15)
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Count the number of values in AR that are larger than 10 (note: you can count with True = 1 and False = 0)
###Code
# %load _solutions/python_rehearsal107.py
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even numbers of `AR` into zero-values.
###Code
# %load _solutions/python_rehearsal108.py
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even positions of matrix AR into the value 30
###Code
# %load _solutions/python_rehearsal109.py
###Output
_____no_output_____
###Markdown
EXERCISE: Select all values above the 75th percentile of the following array AR2 and take the square root of these values
###Code
AR2 = np.random.random(10)
AR2
# %load _solutions/python_rehearsal111.py
###Output
_____no_output_____
###Markdown
EXERCISE: Convert all values -99 of the array AR3 into Nan-values (Note that Nan values can be provided in float arrays as `np.nan` and that numpy provides a specialized function to compare float values, i.e. `np.isclose()`)
###Code
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
# %load _solutions/python_rehearsal113.py
###Output
_____no_output_____
###Markdown
Python rehearsal> *© 2021, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- I measure air pressure
###Code
pressure_hPa = 1010 # hPa
###Output
_____no_output_____
###Markdown
REMEMBER: Use meaningful variable names I'm measuring at sea level, what would be the air pressure of this measured value on other altitudes? I'm curious what the equivalent pressure would be at other altitudes... The **barometric formula**, sometimes called the exponential atmosphere or isothermal atmosphere, is a formula used to model how the **pressure** (or density) of the air **changes with altitude**. The pressure drops approximately by 11.3 Pa per meter in the first 1000 meters above sea level.$$P=P_0 \cdot \exp \left[\frac{-g \cdot M \cdot h}{R \cdot T}\right]$$see https://www.math24.net/barometric-formula/ or https://en.wikipedia.org/wiki/Atmospheric_pressure where:* $T$ = standard temperature, 288.15 (K)* $R$ = universal gas constant, 8.3144598 (J/mol/K)* $g$ = gravitational acceleration, 9.81 (m/s$^2$)* $M$ = molar mass of Earth's air, 0.02896 (kg/mol)and:* $P_0$ = sea level pressure (hPa)* $h$ = height above sea level (m) Let's implement this... To calculate the formula, I need the exponential operator. Pure Python provides a number of mathematical functions, e.g. https://docs.python.org/3.7/library/math.html#math.exp within the `math` library
###Code
import math
# ...modules and libraries...
###Output
_____no_output_____
###Markdown
DON'T: from os import *. Just don't!
###Code
standard_temperature = 288.15
gas_constant = 8.31446
gravit_acc = 9.81
molar_mass_earth = 0.02896
###Output
_____no_output_____
###Markdown
EXERCISE: Calculate the equivalent air pressure at the altitude of 2500 m above sea level for our measured value of pressure_hPa (1010 hPa)
###Code
height = 2500
pressure_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
# ...function/definition for barometric_formula...
def barometric_formula(pressure_sea_level, height=2500):
"""Apply barometric formula
Apply the barometric formula to calculate the air pressure on a given height
Parameters
----------
pressure_sea_level : float
pressure, measured as sea level
height : float
height above sea level (m)
Notes
------
see https://www.math24.net/barometric-formula/ or
https://en.wikipedia.org/wiki/Atmospheric_pressure
"""
standard_temperature = 288.15
gas_constant = 8.3144598
gravit_acc = 9.81
molar_mass_earth = 0.02896
pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
return pressure_altitude
barometric_formula(pressure_hPa, 2000)
barometric_formula(pressure_hPa)
# ...formula not valid above 11000m...
# barometric_formula(pressure_hPa, 12000)
def barometric_formula(pressure_sea_level, height=2500):
"""Apply barometric formula
Apply the barometric formula to calculate the air pressure on a given height
Parameters
----------
pressure_sea_level : float
pressure, measured as sea level
height : float
height above sea level (m)
Notes
------
see https://www.math24.net/barometric-formula/ or
https://en.wikipedia.org/wiki/Atmospheric_pressure
"""
if height > 11000:
raise Exception("Barometric formula only valid for heights lower than 11000m above sea level")
standard_temperature = 288.15
gas_constant = 8.3144598
gravit_acc = 9.81
molar_mass_earth = 0.02896
pressure_altitude = pressure_sea_level * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
return pressure_altitude
# ...combining logical statements...
height > 11000 or pressure_hPa < 9000
# ...load function from file...
###Output
_____no_output_____
###Markdown
Instead of having the functions in a notebook, importing the function from a file can be done as importing a function from an installed package. Save the function `barometric_formula` in a file called `barometric_formula.py` and add the required import statement `import math` on top of the file. Next, run the following cell:
###Code
from barometric_formula import barometric_formula
###Output
_____no_output_____
###Markdown
REMEMBER: Write functions to prevent copy-pasting of code and maximize reuse Add documentation to functions for your future self Named arguments provide default values Import functions from a file just as other modules I measure air pressure multiple times We can store these in a Python [list](https://docs.python.org/3/tutorial/introduction.htmllists):
###Code
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
# ...check methods of lists... append vs insert
###Output
_____no_output_____
###Markdown
Notice: A list is a general container, so can exist of mixed data types as well.
###Code
# ...list is a container...
###Output
_____no_output_____
###Markdown
I want to apply my function to each of these measurements I want to calculate the barometric formula **for** each of these measured values.
###Code
# ...for loop... dummy example
###Output
_____no_output_____
###Markdown
EXERCISE: Write a for loop that prints the adjusted value for altitude 3000m for each of the pressures in pressures_hPa
###Code
for pressure in pressures_hPa:
print(barometric_formula(pressure, 3000))
# ...list comprehensions...
###Output
_____no_output_____
###Markdown
EXERCISE: Write a for loop as a list comprehension to calculate the adjusted value for altitude 3000m for each of the pressures in pressures_hPa and store these values in a new variable pressures_hPa_adjusted
###Code
pressures_hPa_adjusted = [barometric_formula(pressure, 3000) for pressure in pressures_hPa]
pressures_hPa_adjusted
###Output
_____no_output_____
###Markdown
The power of numpy
###Code
import numpy as np
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
np_pressures_hPa = np.array([1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001])
# ...slicing/subselecting is similar...
print(np_pressures_hPa[0], pressures_hPa[0])
###Output
1013 1013
###Markdown
REMEMBER: [] for accessing elements [start:end:step]
###Code
# ...original function using numpy array instead of list... do both
np_pressures_hPa * math.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
###Output
_____no_output_____
###Markdown
REMEMBER: The operations do work on all elements of the array at the same time, you don't need a `for` loop It is also a matter of **calculation speed**:
###Code
lots_of_pressures = np.random.uniform(990, 1040, 1000)
%timeit [barometric_formula(pressure, 3000) for pressure in list(lots_of_pressures)]
%timeit lots_of_pressures * np.exp(-gravit_acc * molar_mass_earth* height/(gas_constant*standard_temperature))
###Output
2.91 µs ± 65 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
###Markdown
REMEMBER: for calculations, numpy outperforms python Boolean indexing and filtering (!)
###Code
np_pressures_hPa
np_pressures_hPa > 1000
###Output
_____no_output_____
###Markdown
You can use this as a filter to select elements of an array:
###Code
boolean_mask = np_pressures_hPa > 1000
np_pressures_hPa[boolean_mask]
###Output
_____no_output_____
###Markdown
or, also to change the values in the array corresponding to these conditions:
###Code
boolean_mask = np_pressures_hPa < 900
np_pressures_hPa[boolean_mask] = 900
np_pressures_hPa
###Output
_____no_output_____
###Markdown
**Intermezzo:** Exercises boolean indexing:
###Code
AR = np.random.randint(0, 20, 15)
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Count the number of values in AR that are larger than 10 _Tip:_ You can count with True = 1 and False = 0 and take the sum of these values
###Code
sum(AR > 10)
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even numbers of `AR` into zero-values.
###Code
AR[AR%2 == 0] = 0
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even positions of matrix AR into the value 30
###Code
AR[1::2] = 30
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Select all values above the 75th percentile of the following array AR2 and take the square root of these values _Tip_: numpy provides a function `percentile` to calculate a given percentile
###Code
AR2 = np.random.random(10)
AR2
np.sqrt(AR2[AR2 > np.percentile(AR2, 75)])
###Output
_____no_output_____
###Markdown
EXERCISE: Convert all values -99 of the array AR3 into Nan-values _Tip_: Nan values can be provided in float arrays as `np.nan`, and numpy provides a specialized function to compare float values, i.e. `np.isclose()`
###Code
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
AR3[np.isclose(AR3, -99)] = np.nan
AR3
###Output
_____no_output_____
###Markdown
I also have measurement locations
###Code
location = 'Ghent - Sterre'
# ...check methods of strings... split, upper,...
locations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn',
'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',
'Antwerp - Groenplaats', 'Brussels- Grand place',
'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']
###Output
_____no_output_____
###Markdown
EXERCISE: Use a list comprehension to convert all locations to lower case. _Tip:_ check the available methods of lists by writing: `location.` + TAB button
###Code
[location.lower() for location in locations]
###Output
_____no_output_____
###Markdown
I also measure temperature
###Code
pressures_hPa = [1013, 1003, 1010, 1020, 1032, 993, 989, 1018, 889, 1001]
temperature_degree = [23, 20, 17, 8, 12, 5, 16, 22, -2, 16]
locations = ['Ghent - Sterre', 'Ghent - Coupure', 'Ghent - Blandijn',
'Ghent - Korenlei', 'Ghent - Kouter', 'Ghent - Coupure',
'Antwerp - Groenplaats', 'Brussels- Grand place',
'Antwerp - Justitipaleis', 'Brussels - Tour & taxis']
###Output
_____no_output_____
###Markdown
Python [dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) are a convenient way to store multiple types of data together, so that you do not need too many different variables:
###Code
measurement = {}
measurement['pressure_hPa'] = 1010
measurement['temperature'] = 23
measurement
# ...select on name, iterate over keys or items...
measurements = {'pressure_hPa': pressures_hPa,
'temperature_degree': temperature_degree,
'location': locations}
measurements
###Output
_____no_output_____
###Markdown
__But__: I want to apply my barometric function to measurements taken in Ghent when the temperature was below 10 degrees...
###Code
for idx, pressure in enumerate(measurements['pressure_hPa']):
if measurements['location'][idx].startswith("Ghent") and \
measurements['temperature_degree'][idx]< 10:
print(barometric_formula(pressure, 3000))
###Output
714.6658383288585
695.748213196624
###Markdown
when a table would be more appropriate... Pandas!
###Code
import pandas as pd
measurements = pd.DataFrame(measurements)
measurements
barometric_formula(measurements[(measurements["location"].str.contains("Ghent")) &
(measurements["temperature_degree"] < 10)]["pressure_hPa"], 3000)
###Output
_____no_output_____
###Markdown
Python the basics: A quick recap> *DS Data manipulation, analysis and visualisation in Python* > *December, 2017*> *© 2016, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)* First steps the obligatory...
###Code
print("Hello DS_course!") # python 3(!)
###Output
Hello DS_course!
###Markdown
Python is a calculator
###Code
4*5
3**2 # ^ in R
(3 + 4)/2, 3 + 4/2,
21//5, 21%5, # floor division, modulo
###Output
_____no_output_____
###Markdown
Variable assignment
###Code
my_variable_name = 'DS_course'
my_variable_name
name, age = 'John', 30
print('The age of {} is {:d}'.format(name, age))
###Output
The age of John is 30
###Markdown
More information on print format: https://pyformat.info/ Loading functionalities
###Code
import os
os.listdir()
###Output
_____no_output_____
###Markdown
Loading with defined short name (community agreement)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Loading functions from any file/module/package:
###Code
%%file rehears1.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
"""Dummy function to print the string it"""
print('it')
import rehears1
rehears1.print_it()
%%file rehears2.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
"""Dummy function to print the string it"""
print('it')
def print_custom(my_input):
"""Dummy function to print the string that"""
print(my_input)
from rehears2 import print_it, print_custom
print_custom('DS_course')
###Output
DS_course
###Markdown
DON'T: `from os import *`. Just don't! Datatypes Numerical **floats**
###Code
a_float = 5.
type(a_float)
###Output
_____no_output_____
###Markdown
**integers**
###Code
an_integer = 4
type(an_integer)
###Output
_____no_output_____
###Markdown
**booleans**
###Code
a_boolean = True
a_boolean
type(a_boolean)
3 > 4 # results in boolean
###Output
_____no_output_____
###Markdown
Containers Strings
###Code
a_string = "abcde"
a_string
a_string.capitalize(), a_string.capitalize(), a_string.endswith('f') #,...
a_string.upper().replace('B', 'A')
a_string + a_string
a_string * 5
###Output
_____no_output_____
###Markdown
Lists
###Code
a_list = [1, 'a', 3, 4]
a_list.append(8.2)
a_list
a_list.reverse()
a_list
###Output
_____no_output_____
###Markdown
REMEMBER: The list is updated in-place; a_list.reverse() does not return anything, it updates the list
###Code
a_list + ['b', 5]
[el*2 for el in a_list] # list comprehensions...a short for-loop
[el for el in dir(list) if not el[0]=='_']
[el for el in dir(list) if not str.startswith(el, "_")]
###Output
_____no_output_____
###Markdown
EXERCISE: Rewrite the previous list comprehension by using a builtin string method to test if the element starts with an underscore
###Code
# %load _solutions/python_rehearsal35.py
###Output
_____no_output_____
###Markdown
EXERCISE: Given the sentence `the quick brown fox jumps over the lazy dog`, split the sentence in words and put all the word-lengths in a list.
###Code
sentence = "the quick brown fox jumps over the lazy dog"
sentence_list = str.split(sentence, " ")
sentence.split(" ")
[len(word) for word in sentence.split(sep = " ")]
# %load _solutions/python_rehearsal37.py
###Output
_____no_output_____
###Markdown
Dictionaries
###Code
a_dict = {'a': 1, 'b': 2}
a_dict['c'] = 3
a_dict['a'] = 5
a_dict
for a_key, a_value in a_dict.items():
print(a_key, a_value)
a_dict.keys(), a_dict.values(), a_dict.items()
an_empty_dic = dict() # or just {}
an_empty_dic
###Output
_____no_output_____
###Markdown
**tuple**
###Code
a_tuple = (1, 2, 4)
###Output
_____no_output_____
###Markdown
REMEMBER: (), [], {} => depends from the datatype you want to create!
###Code
collect = a_list, a_dict
print(a_list)
print(a_dict)
type(collect)
serie_of_numbers = 3, 4, 5
# Using tuples on the left-hand side of assignment allows you to extract fields
a, b, c = serie_of_numbers
print(c, b, a)
###Output
5 4 3
###Markdown
Accessing container values
###Code
a_string[2:5]
a_list[-2]
a_list = [0, 1, 2, 3]
a_list[:3]
a_list
a_list[::2]
###Output
_____no_output_____
###Markdown
EXERCISE: Reverse the `a_list` without using the built-in reverse method, but using an appropriate slicing command:
###Code
# %load _solutions/python_rehearsal54.py
a_dict['a']
a_tuple[1]
###Output
_____no_output_____
###Markdown
REMEMBER: [] for accessing elements Assigning new values to items -> `mutable` vs `immutable`
###Code
a_list
a_list[2] = 10 # element 2 changed -- mutable
a_list
## TRY these yourself by un-commenting
#a_tuple[1] = 10 # cfr. a_string -- immutable
#a_string[3] = 'q'
###Output
_____no_output_____
###Markdown
Control flows for-loop
###Code
for i in [1, 2, 3, 4]:
print(i)
for i in a_list: # anything that is a collection/container can be looped
print(i)
hello_str = "Hello DS"
for stub in hello_str.split(" "):
print(stub)
for charact in hello_str:
print(charact)
###Output
Hello
DS
H
e
l
l
o
D
S
###Markdown
EXERCISE: Loop through the characters of the string `Hello DS` and print each character on a new line
###Code
# %load _solutions/python_rehearsal62.py
for i in a_dict: # items, keys, values
print(i)
for j, key in enumerate(a_dict.keys()):
print(j, key)
###Output
_____no_output_____
###Markdown
REMEMBER: When needing an iterator to count, just use `enumerate`. You mostly do not need the i = 0 ... i = i + 1 pattern in a for loop. Check [itertools](http://pymotw.com/2/itertools/) as well... while
###Code
b = 7
while b < 10:
b+=1
print(b)
###Output
_____no_output_____
###Markdown
if statement
###Code
if 'a' in a_dict:
print('a is in!')
if 3 > 4:
print('This is valid')
testvalue = True # 0, 1, None, False, 4 > 3
if testvalue:
print('valid')
else:
raise Exception("Not valid!")
myvalue = 3
if isinstance(myvalue, str):
print('this is a string')
elif isinstance(myvalue, float):
print('this is a float')
elif isinstance(myvalue, list):
print('this is a list')
else:
print('no idea actually')
###Output
_____no_output_____
###Markdown
Functions We've been using functions the whole time...
###Code
len(a_list)
###Output
_____no_output_____
###Markdown
Calling a method on an object:
###Code
a_list.reverse()
a_list
###Output
_____no_output_____
###Markdown
Defining a function:
###Code
def f(a, b, verbose=False):
"""custom summation function
Parameters
----------
a : number
first number to sum
b : number
second number to sum
verbose: boolean
require additional information (True) or not (False)
Returns
-------
my_sum : number
sum of the provided two input elements
"""
if verbose:
print('print a lot of information to the user')
my_sum = a + b
return my_sum
f(2, 3, verbose=False) # [3], '4'
###Output
_____no_output_____
###Markdown
REMEMBER: () for calling functions **Functions are objects** as well... (!)
###Code
def f1():
print('this is function 1 speaking...')
def f2():
print('this is function 2 speaking...')
def function_of_functions(inputfunction):
return inputfunction()
function_of_functions(f1)
###Output
_____no_output_____
###Markdown
**Anonymous functions (lambda)**
###Code
add_two = (lambda x: x + 2)
add_two(10)
###Output
_____no_output_____
###Markdown
Numpy
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Creating numpy array
###Code
my_array = np.array([1, 1.5, 2, 2.5]) #np.array(anylist)
my_array.shape
np.arange(5, 10, 2)
np.linspace(5, 9, 3)
np.zeros((5, 2)), np.ones(5)
np.zeros((5, 2)).shape, np.zeros((5, 2)).size
###Output
_____no_output_____
###Markdown
Slicing
###Code
my_array = np.random.randint(2, 10, 10)
my_array
my_array[-2:]
my_array[0:7:2]
###Output
_____no_output_____
###Markdown
Assign new values to items
###Code
my_array[:2] = 10
my_array
my_array = my_array.reshape(5, 2)
my_array
my_array[0, :]
###Output
_____no_output_____
###Markdown
Element-wise operations
###Code
my_array = np.random.randint(2, 10, 10)
print(my_array)
my_array%3 # == 0
np.exp(my_array), np.sin(my_array)
np.max(my_array)
np.cumsum(my_array) == my_array.cumsum()
my_array.max(axis=0)
my_array * my_array # element-wise
###Output
_____no_output_____
###Markdown
REMEMBER: The operations do work on all elements of the array at the same time, you don't need a `for` loop
###Code
a_list = range(1000)
%timeit [i**2 for i in a_list]
an_array = np.arange(1000)
%timeit an_array**2
###Output
1.71 µs ± 79.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Boolean indexing and filtering (!)
###Code
row_array = np.random.randint(1, 20, 10)
row_array
###Output
_____no_output_____
###Markdown
Conditions can be checked (element-wise):
###Code
row_array > 5
boolean_mask = row_array > 5
###Output
_____no_output_____
###Markdown
You can use this as a filter to select elements of an array:
###Code
row_array[boolean_mask]
###Output
_____no_output_____
###Markdown
or, also to change the values in the array corresponding to these conditions:
###Code
row_array[boolean_mask] = 20
row_array
###Output
_____no_output_____
###Markdown
in short - making the values equal to 20 now -20:
###Code
row_array[row_array == 20] = -20
row_array
###Output
_____no_output_____
###Markdown
This requires some practice...
###Code
AR = np.random.randint(0, 20, 15)
AR
###Output
_____no_output_____
###Markdown
EXERCISE: Count the number of values in AR that are larger than 10 (note: you can count with True = 1 and False = 0)
###Code
sum(AR>10)
# %load _solutions/python_rehearsal107.py
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even numbers of `AR` into zero-values.
###Code
AR[AR%2 == 0] = 0
AR
# %load _solutions/python_rehearsal108.py
###Output
_____no_output_____
###Markdown
EXERCISE: Change all even positions of matrix AR into the value 30
###Code
AR[np.arange(len(AR))%2 == 0] = 30
AR
# %load _solutions/python_rehearsal109.py
###Output
_____no_output_____
###Markdown
EXERCISE: Select all values above the 75th percentile of the following array AR2 and take the square root of these values
###Code
AR2 = np.random.random(10)
AR2
np.sqrt(AR2[AR2 > np.percentile(AR2, 75)])  # square root of the values above the 75th percentile
# %load _solutions/python_rehearsal111.py
###Output
_____no_output_____
###Markdown
EXERCISE: Convert all values -99 of the array AR3 into Nan-values (Note that Nan values can be provided in float arrays as `np.nan` and that numpy provides a specialized function to compare float values, i.e. `np.isclose()`)
###Code
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
AR3[AR3 == -99] = np.nan
AR3
# Alternative solution with np.isclose()
AR3 = np.array([-99., 2., 3., 6., 8, -99., 7., 5., 6., -99.])
AR3[np.isclose(AR3, -99)] = np.nan
AR3
# %load _solutions/python_rehearsal113.py
###Output
_____no_output_____ |
src/deep_model.ipynb | ###Markdown
Create two lists from the data file, one with the molecules and the other with the classes of the molecules. The classes are integers [0..10] representing the 11 classes. Keep in mind that this is a multi-label problem, i.e. each molecule (metabolite) may have more than one class.
###Code
# load the molecules
with open("../data/kegg_classes.txt") as f:
mols_str, classes = zip(*[ line.strip().split() for line in f])
# encode each molecule using deep smiles
import deepsmiles
print("DeepSMILES version: %s" % deepsmiles.__version__)
converter = deepsmiles.Converter(rings=True, branches=True)
# a list of deep smiles per molecule
deep_smiles_enc = [ converter.encode(m) for m in mols_str ]
deep_smiles_enc
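# Hypothetical sketch of turning `classes` into multi-label (multi-hot) targets.
# Assumption (not verified against kegg_classes.txt): each class field is a
# comma-separated list of integer labels in [0..10], e.g. "2,7".
import numpy as np
def to_multi_hot(class_field, n_classes=11):
    """Turn a string like '2,7' into a 0/1 vector of length n_classes."""
    labels = [int(c) for c in class_field.split(",")]
    vec = np.zeros(n_classes, dtype=np.float32)
    vec[labels] = 1.0
    return vec
# y = np.stack([to_multi_hot(c) for c in classes])  # shape: (n_molecules, 11), hypothetical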
###Output
_____no_output_____ |
notebooks/M5_figs_process-Copy1.ipynb | ###Markdown
Figures by heightEach of the following figures could be drawn for any probe height. Currently, figures are only drawn for z=80m (hub height)
###Code
categories = list(catinfo['columns'].keys())
for cat in categories:
if 'stability flag' in cat.lower():
continue
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
catcolumns, probe_heights, _ = utils.get_vertical_locations(catinfo['columns'][cat])
## wind roses by height
for height in [87]:#probe_heights:
# SCATTER PLOTS
## cumulative scatter
fig, ax = vis.winddir_scatter(metdat, catinfo, cat, vertloc=height, exclude_angles=exclude_angles)
fig.savefig(os.path.join(catfigpath,'{}_{}_scatter_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# stability scatter
fig, ax = vis.stability_winddir_scatter(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_scatter_stability_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# HISTOGRAMS
# cumulative
fig,ax = vis.hist(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# stability breakout
fig,ax = vis.hist_by_stability(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_stabilty_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# stability stacked
fig,ax = fig,ax = vis.stacked_hist_by_stability(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_stabilty_stacked_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
if 'ti' not in cat.lower():
# groupby direction scatter
fig,ax = vis.groupby_scatter(metdat, catinfo, cat, vertloc=height, abscissa='direction', groupby='ti')
fig.set_size_inches(8,3)
fig.tight_layout()
fig.savefig(os.path.join(catfigpath,'{}_{}_TI_scatter_{}m.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
plt.close('all')
###Output
/Users/nhamilto/.local/lib/python3.6/site-packages/numpy/lib/function_base.py:838: RuntimeWarning: invalid value encountered in true_divide
return n/db/n.sum(), bin_edges
/Users/nhamilto/anaconda/lib/python3.6/site-packages/pandas/plotting/_core.py:1060: RuntimeWarning: invalid value encountered in greater_equal
if (values >= 0).all():
/Users/nhamilto/anaconda/lib/python3.6/site-packages/pandas/plotting/_core.py:1062: RuntimeWarning: invalid value encountered in less_equal
elif (values <= 0).all():
###Markdown
Wind rosesMight as well draw all of the wind roses (by height)
###Code
# wind roses
cat = 'speed'
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.makedirs(os.path.join(figPath,savecat,'roses'), mode=0o777, exist_ok=True)
catfigpath = os.path.join(figPath,savecat,'roses')
## wind roses by height
for height in probe_heights:
## cumulative wind rose
fig,ax,leg = vis.rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,15,6), ylim=9)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
## monthly wind roses
fig,ax,leg = vis.monthly_rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,15,6), ylim=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_monthly_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
plt.close('all')
# TI roses
cat = 'ti'
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.makedirs(os.path.join(figPath,savecat,'roses'), mode=0o777, exist_ok=True)
catfigpath = os.path.join(figPath,savecat,'roses')
## wind roses by height
for height in probe_heights:
## cumulative wind rose
fig,ax,leg = vis.rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,40,6), ylim=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
## monthly wind roses
fig,ax,leg = vis.monthly_rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,40,6), ylim=14)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_monthly_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
plt.close('all')
fig, ax = vis.normalized_hist_by_stability(metdat, catinfo)
fig.savefig(os.path.join(figPath,'{}_normalized_stability_flag.png'.format(towerID)), dpi=200, bbox_inches='tight')
fig, ax = vis.normalized_monthly_hist_by_stability(metdat,catinfo)
fig.savefig(os.path.join(figPath,'{}_normalized_stability_flag_monthly.png'.format(towerID)), dpi=200, bbox_inches='tight')
cat = 'wind shear'
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
fig,ax = vis.hist(metdat, catinfo, cat, vertloc=122)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, 122)), dpi=200, bbox_inches='tight')
height
plt.rcParams.update({'font.size': 12})
colors = utils.get_nrelcolors()
blue = colors['blue'][0]
cat = 'wind shear'
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
for col in catcolumns[5:6]:
fig, ax = plt.subplots(figsize=(5,3))
metdat[col].hist(ax=ax, bins=35, color=blue, edgecolor='k', density=False, weights = np.ones(metdat[col].count())/len(metdat[col]))
ax.grid(False)
ax.set_title(col, fontsize=12)
ax.set_ylabel('Frequency [%]', fontsize=12)
ax.set_xlabel(catinfo['labels'][cat], fontsize=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, 122)), dpi=200, bbox_inches='tight')
colors = utils.get_nrelcolors()
blue = colors['blue'][0]
cat = 'wind veer'
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
for col in catcolumns[5:6]:
fig, ax = plt.subplots(figsize=(5,3))
metdat[col].hist(ax=ax, bins=35, color=blue, edgecolor='k', density=False, weights = np.ones(metdat[col].count())/len(metdat[col]))
ax.grid(False)
ax.set_title(col, fontsize=12)
ax.set_ylabel('Frequency [%]', fontsize=12)
ax.set_xlabel(catinfo['labels'][cat], fontsize=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, 122)), dpi=200, bbox_inches='tight')
imp.reload(vis)
for cat in ['air pressure']:#categories:
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
fig,ax = vis.monthly_stacked_hist_by_stability(metdat,catinfo,cat, vertloc=87)
fig.savefig(os.path.join(catfigpath,'{}_{}_monthly_stacked_hist_by_stability_{}.png'.format(towerID, savecat, 87)), dpi=200, bbox_inches='tight')
# direction conditioned
dircol, probe_heights, ind = utils.get_vertical_locations(catinfo['columns']['direction'], location=87)
spdcol, probe_heights, ind = utils.get_vertical_locations(catinfo['columns']['speed'], location=87)
northerly = metdat[(metdat[dircol] < 45) | (metdat[dircol] > 315)]
southerly = metdat[(metdat[dircol] > 135) & (metdat[dircol] < 225)]
weakwest = metdat[(metdat[dircol] > 225) & (metdat[dircol] < 315) & (metdat[spdcol] < 10)]
strongwest = metdat[(metdat[dircol] > 225) & (metdat[dircol] < 315) & (metdat[spdcol] > 10)]
imp.reload(vis)
cat = 'ti'
fig,ax = vis.stacked_hist_by_stability(northerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(southerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(weakwest, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(strongwest, catinfo, cat, vertloc=87)
cat = 'turbulent kinetic energy'
fig,ax = vis.stacked_hist_by_stability(northerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(southerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(weakwest, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(strongwest, catinfo, cat, vertloc=87)
###Output
_____no_output_____
###Markdown
Monthly histograms
###Code
categories = list(catinfo['columns'].keys())
imp.reload(vis)
for cat in categories:
height = 87
if 'shear' in cat.lower():
height = 122
if 'stability flag' in cat.lower():
continue
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
fig,ax = vis.monthly_hist(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_monthly_hist_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
plt.clf()
metdat.columns
###Output
_____no_output_____
###Markdown
Bad pressure data correctionthere is a period of data for which the pressure signals are not to be trustedIt appears that there was a poor calibration between two periods of downtime.Data has been corrected by adding an offset to that range of data. The offset is equal to the difference between the mean value of the bad data and the mean value of the annual average over that period.
###Code
varcol, vertloc, _= utils.get_vertical_locations(catinfo['columns']['air pressure'])
for col in varcol:
# pressure data
pdat = metdat[col].copy()
# find start and stop times of bad data
timediff = np.abs(np.diff(pdat.index.values))
temp = timediff.copy()
temp.sort()
limits = [np.where(timediff==temp[-1])[0][0], np.where(timediff==temp[-2])[0][0]]
# extract bad data
bdat = pdat.iloc[limits[0]+1:limits[1]].copy()
# good data is outside of that range
gdat = pdat[(pdat.index<pdat.index[limits[0]]) | (pdat.index>=pdat.index[limits[1]])].copy()
# average value of pressure for that day of year
dayofyearaverage = gdat.groupby(gdat.index.dayofyear).mean()
# correction is just the difference of mean values
pressure_correction = dayofyearaverage.values[bdat.index.dayofyear].mean()-bdat.mean()
# corrected data
cdat = bdat+(dayofyearaverage.values[bdat.index.dayofyear].mean()-bdat.mean())
# metdat[col].iloc[limits[0]+1:limits[1]] = cdat
fig,ax = plt.subplots()
pdat.plot(ax=ax, label='raw data')
bdat.plot(ax=ax, label='bad data')
cdat.plot(ax=ax, label='corrected data')
ax.set_ylabel(catinfo['labels']['air pressure'])
fig.legend(loc=6, bbox_to_anchor=(1,0.5), fontsize=10, frameon=False)
fig.tight_layout()
fig.savefig(os.path.join(figPath,'pressure_correction.png'), bbox_inches='tight', dpi=200)
categories = list(catinfo['columns'].keys())
for cat in categories:
if 'stability flag' in cat.lower():
continue
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.makedirs(os.path.join(figPath,savecat), mode=0o777, exist_ok=True)
catfigpath = os.path.join(figPath,savecat)
# Profiles
## cumulative profile
fig, ax = vis.cumulative_profile(metdat, catinfo, cat)
fig.savefig(os.path.join(catfigpath,'{}_{}_profile.png'.format(towerID, savecat)), dpi=200, bbox_inches='tight')
## monthly profile
fig, ax = vis.monthly_profile(metdat, catinfo, cat)
fig.savefig(os.path.join(catfigpath,'{}_{}_profile_monthly.png'.format(towerID, savecat)), dpi=200, bbox_inches='tight')
## stability profile
fig,ax = vis.stability_profile(metdat, catinfo, cat)
fig.savefig(os.path.join(catfigpath,'{}_{}_profile_stability.png'.format(towerID, savecat)), dpi=200, bbox_inches='tight')
## monthly stability profile
fig,ax = vis.monthly_stability_profiles(metdat, catinfo, cat)
fig.savefig(os.path.join(catfigpath,'{}_{}_profile_monthly_stability.png'.format(towerID, savecat)), dpi=200, bbox_inches='tight')
# Diurnal cycle
## cumulative hourly plot
fig,ax = vis.hourlyplot(metdat, catinfo, cat)
fig.savefig(os.path.join(catfigpath,'{}_{}_hourly.png'.format(towerID, savecat)), dpi=200, bbox_inches='tight')
## monthly hourly plot
fig,ax = vis.monthlyhourlyplot(metdat, catinfo, cat)
fig.savefig(os.path.join(catfigpath,'{}_{}_hourly_monthly.png'.format(towerID, savecat)), dpi=200, bbox_inches='tight')
plt.close('all')
###Output
_____no_output_____
###Markdown
Figures by heightEach of the following figures could be drawn for any probe height. Currently, figures are only drawn for z=80m (hub height)
###Code
categories = list(catinfo['columns'].keys())
for cat in categories:
if 'stability flag' in cat.lower():
continue
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
catcolumns, probe_heights, _ = utils.get_vertical_locations(catinfo['columns'][cat])
## wind roses by height
for height in [87]:#probe_heights:
# SCATTER PLOTS
## cumulative scatter
fig, ax = vis.winddir_scatter(metdat, catinfo, cat, vertloc=height, exclude_angles=exclude_angles)
fig.savefig(os.path.join(catfigpath,'{}_{}_scatter_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# stability scatter
fig, ax = vis.stability_winddir_scatter(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_scatter_stability_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# HISTOGRAMS
# cumulative
fig,ax = vis.hist(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# stability breakout
fig,ax = vis.hist_by_stability(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_stabilty_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
# stability stacked
fig,ax = fig,ax = vis.stacked_hist_by_stability(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_stabilty_stacked_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
if 'ti' not in cat.lower():
# groupby direction scatter
fig,ax = vis.groupby_scatter(metdat, catinfo, cat, vertloc=height, abscissa='direction', groupby='ti')
fig.set_size_inches(8,3)
fig.tight_layout()
fig.savefig(os.path.join(catfigpath,'{}_{}_TI_scatter_{}m.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
plt.close('all')
###Output
/Users/nhamilto/.local/lib/python3.6/site-packages/numpy/lib/function_base.py:838: RuntimeWarning: invalid value encountered in true_divide
return n/db/n.sum(), bin_edges
/Users/nhamilto/anaconda/lib/python3.6/site-packages/pandas/plotting/_core.py:1060: RuntimeWarning: invalid value encountered in greater_equal
if (values >= 0).all():
/Users/nhamilto/anaconda/lib/python3.6/site-packages/pandas/plotting/_core.py:1062: RuntimeWarning: invalid value encountered in less_equal
elif (values <= 0).all():
###Markdown
Wind rosesMight as well draw all of the wind roses (by height)
###Code
# wind roses
cat = 'speed'
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.makedirs(os.path.join(figPath,savecat,'roses'), mode=0o777, exist_ok=True)
catfigpath = os.path.join(figPath,savecat,'roses')
## wind roses by height
for height in probe_heights:
## cumulative wind rose
fig,ax,leg = vis.rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,15,6), ylim=9)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
## monthly wind roses
fig,ax,leg = vis.monthly_rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,15,6), ylim=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_monthly_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
plt.close('all')
# TI roses
cat = 'ti'
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.makedirs(os.path.join(figPath,savecat,'roses'), mode=0o777, exist_ok=True)
catfigpath = os.path.join(figPath,savecat,'roses')
## wind roses by height
for height in probe_heights:
## cumulative wind rose
fig,ax,leg = vis.rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,40,6), ylim=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
## monthly wind roses
fig,ax,leg = vis.monthly_rose_fig(metdat, catinfo, cat, vertloc=height, bins=np.linspace(0,40,6), ylim=14)
fig.savefig(os.path.join(catfigpath,'{}_{}_rose_monthly_{}m.png'.format(towerID, cat, height)), dpi=200, bbox_inches='tight')
plt.close('all')
fig, ax = vis.normalized_hist_by_stability(metdat, catinfo)
fig.savefig(os.path.join(figPath,'{}_normalized_stability_flag.png'.format(towerID)), dpi=200, bbox_inches='tight')
fig, ax = vis.normalized_monthly_hist_by_stability(metdat,catinfo)
fig.savefig(os.path.join(figPath,'{}_normalized_stability_flag_monthly.png'.format(towerID)), dpi=200, bbox_inches='tight')
cat = 'wind shear'
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
fig,ax = vis.hist(metdat, catinfo, cat, vertloc=122)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, 122)), dpi=200, bbox_inches='tight')
height
plt.rcParams.update({'font.size': 12})
colors = utils.get_nrelcolors()
blue = colors['blue'][0]
cat = 'wind shear'
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
for col in catcolumns[5:6]:
fig, ax = plt.subplots(figsize=(5,3))
metdat[col].hist(ax=ax, bins=35, color=blue, edgecolor='k', density=False, weights = np.ones(metdat[col].count())/len(metdat[col]))
ax.grid(False)
ax.set_title(col, fontsize=12)
ax.set_ylabel('Frequency [%]', fontsize=12)
ax.set_xlabel(catinfo['labels'][cat], fontsize=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, 122)), dpi=200, bbox_inches='tight')
colors = utils.get_nrelcolors()
blue = colors['blue'][0]
cat = 'wind veer'
# savepath for new figs
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
catcolumns, probe_heights, ind = utils.get_vertical_locations(catinfo['columns'][cat])
for col in catcolumns[5:6]:
fig, ax = plt.subplots(figsize=(5,3))
metdat[col].hist(ax=ax, bins=35, color=blue, edgecolor='k', density=False, weights = np.ones(metdat[col].count())/len(metdat[col]))
ax.grid(False)
ax.set_title(col, fontsize=12)
ax.set_ylabel('Frequency [%]', fontsize=12)
ax.set_xlabel(catinfo['labels'][cat], fontsize=12)
fig.savefig(os.path.join(catfigpath,'{}_{}_hist_{}.png'.format(towerID, savecat, 122)), dpi=200, bbox_inches='tight')
imp.reload(vis)
for cat in ['air pressure']:#categories:
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
fig,ax = vis.monthly_stacked_hist_by_stability(metdat,catinfo,cat, vertloc=87)
fig.savefig(os.path.join(catfigpath,'{}_{}_monthly_stacked_hist_by_stability_{}.png'.format(towerID, savecat, 87)), dpi=200, bbox_inches='tight')
# direction conditioned
dircol, probe_heights, ind = utils.get_vertical_locations(catinfo['columns']['direction'], location=87)
spdcol, probe_heights, ind = utils.get_vertical_locations(catinfo['columns']['speed'], location=87)
northerly = metdat[(metdat[dircol] < 45) | (metdat[dircol] > 315)]
southerly = metdat[(metdat[dircol] > 135) & (metdat[dircol] < 225)]
weakwest = metdat[(metdat[dircol] > 225) & (metdat[dircol] < 315) & (metdat[spdcol] < 10)]
strongwest = metdat[(metdat[dircol] > 225) & (metdat[dircol] < 315) & (metdat[spdcol] > 10)]
imp.reload(vis)
cat = 'ti'
fig,ax = vis.stacked_hist_by_stability(northerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(southerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(weakwest, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(strongwest, catinfo, cat, vertloc=87)
cat = 'turbulent kinetic energy'
fig,ax = vis.stacked_hist_by_stability(northerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(southerly, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(weakwest, catinfo, cat, vertloc=87)
fig,ax = vis.stacked_hist_by_stability(strongwest, catinfo, cat, vertloc=87)
###Output
_____no_output_____
###Markdown
Monthly histograms
###Code
categories = list(catinfo['columns'].keys())
imp.reload(vis)
for cat in categories:
height = 87
if 'shear' in cat.lower():
height = 122
if 'stability flag' in cat.lower():
continue
savecat = catinfo['save'][cat]
catfigpath = os.path.join(figPath,savecat)
fig,ax = vis.monthly_hist(metdat, catinfo, cat, vertloc=height)
fig.savefig(os.path.join(catfigpath,'{}_{}_monthly_hist_{}.png'.format(towerID, savecat, height)), dpi=200, bbox_inches='tight')
plt.clf()
metdat.columns
###Output
_____no_output_____ |
appyters/DSigDB_Harmonizome_ETL/DSigDB.ipynb | ###Markdown
Harmonizome ETL: DSigDB Created by: Charles Dai Credit to: Moshe Silverstein. Data Source: http://dsigdb.tanlab.org/DSigDBv1.0/download.html
###Code
# appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
{% do SectionField(
name='settings',
title='Settings',
img='setting_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="http://dsigdb.tanlab.org/DSigDBv1.0/download.html" target="_blank">dsigdb.tanlab.org/DSigDBv1.0</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='dsigdb data',
label='DSigDB Data (txt)',
default='DSigDB_All_detailed.txt',
examples={'DSigDB_All_detailed.txt': 'http://dsigdb.tanlab.org/Downloads/DSigDB_All_detailed.txt'},
section='data'
) %}
%%appyter code_eval
{% set group = ChoiceField(
name='drug subset',
label='Drug Subset',
choices={
'Computational Drug Signatures': "'Computational'",
'FDA Approved Drugs': "'FDA'",
'Kinase Inhibitors': "'Kinase'",
'Peturbagen Signatures': "'Peturbagen'"
},
default='Computational Drug Signatures',
section='settings'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
%%appyter code_exec
output_name = 'dsigdb_' + {{group}}.lower()
path = 'Output/DSigDB-' + {{group}}
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
sep='\t', usecols=['Drug', 'Gene', 'Source']
)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
%%appyter code_exec
sources = {
'Computational': ['BOSS', 'CTD', 'TTD', 'D4 PubChem', 'D4 ChEMBL'],
'FDA': ['D1'],
'Kinase': ['FDA', 'Kinome Scan', 'LINCS', 'MRC', 'GSK', 'Roche', 'RBC'],
'Peturbagen': ['CMAP']
}[{{group}}]
# Get only relevant resources for desired drugs
df = df.dropna()
df = df[df['Source'].str.contains('|'.join(sources))]
df.head()
df = df.drop('Source', axis=1).set_index('Gene')
df.index.name = 'Gene Symbol'
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____
###Markdown
Harmonizome ETL: DSigDB Created by: Charles Dai Credit to: Moshe Silverstein. Data Source: http://dsigdb.tanlab.org/DSigDBv1.0/download.html
###Code
#%%appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
{% do SectionField(
name='settings',
title='Settings',
img='setting_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="http://dsigdb.tanlab.org/DSigDBv1.0/download.html" target="_blank">dsigdb.tanlab.org/DSigDBv1.0</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='dsigdb data',
label='DSigDB Data (txt)',
default='DSigDB_All_detailed.txt',
examples={'DSigDB_All_detailed.txt': 'http://dsigdb.tanlab.org/Downloads/DSigDB_All_detailed.txt'},
section='data'
) %}
%%appyter code_eval
{% set group = ChoiceField(
name='drug subset',
label='Drug Subset',
choices={
'Computational Drug Signatures': "'Computational'",
'FDA Approved Drugs': "'FDA'",
'Kinase Inhibitors': "'Kinase'",
'Peturbagen Signatures': "'Peturbagen'"
},
default='Computational Drug Signatures',
section='settings'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
%%appyter code_exec
output_name = 'dsigdb_' + {{group}}.lower()
path = 'Output/DSigDB-' + {{group}}
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
sep='\t', usecols=['Drug', 'Gene', 'Source']
)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
%%appyter code_exec
sources = {
'Computational': ['BOSS', 'CTD', 'TTD', 'D4 PubChem', 'D4 ChEMBL'],
'FDA': ['D1'],
'Kinase': ['FDA', 'Kinome Scan', 'LINCS', 'MRC', 'GSK', 'Roche', 'RBC'],
'Peturbagen': ['CMAP']
}[{{group}}]
# Get only relevant resources for desired drugs
df = df.dropna()
df = df[df['Source'].str.contains('|'.join(sources))]
df.head()
df = df.drop('Source', axis=1).set_index('Gene')
df.index.name = 'Gene Symbol'
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____
###Markdown
Harmonizome ETL: DSigDB Created by: Charles Dai Credit to: Moshe Silverstein. Data Source: http://dsigdb.tanlab.org/DSigDBv1.0/download.html
###Code
# appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
{% do SectionField(
name='settings',
title='Settings',
img='setting_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="http://dsigdb.tanlab.org/DSigDBv1.0/download.html" target="_blank">dsigdb.tanlab.org/DSigDBv1.0</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='dsigdb data',
label='DSigDB Data (txt)',
default='DSigDB_All_detailed.txt',
examples={'DSigDB_All_detailed.txt': 'http://dsigdb.tanlab.org/Downloads/DSigDB_All_detailed.txt'},
section='data'
) %}
%%appyter code_eval
{% set group = ChoiceField(
name='drug subset',
label='Drug Subset',
choices={
'Computational Drug Signatures': "'Computational'",
'FDA Approved Drugs': "'FDA'",
'Kinase Inhibitors': "'Kinase'",
'Peturbagen Signatures': "'Peturbagen'"
},
default='Computational Drug Signatures',
section='settings'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
%%appyter code_exec
output_name = 'dsigdb_' + {{group}}.lower()
path = 'Output/DSigDB-' + {{group}}
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
sep='\t', usecols=['Drug', 'Gene', 'Source']
)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
%%appyter code_exec
sources = {
'Computational': ['BOSS', 'CTD', 'TTD', 'D4 PubChem', 'D4 ChEMBL'],
'FDA': ['D1'],
'Kinase': ['FDA', 'Kinome Scan', 'LINCS', 'MRC', 'GSK', 'Roche', 'RBC'],
'Peturbagen': ['CMAP']
}[{{group}}]
# Get only relevant resources for desired drugs
df = df.dropna()
df = df[df['Source'].str.contains('|'.join(sources))]
df.head()
df = df.drop('Source', axis=1).set_index('Gene')
df.index.name = 'Gene Symbol'
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____
###Markdown
Harmonizome ETL: DSigDB Created by: Charles Dai Credit to: Moshe Silverstein. Data Source: http://dsigdb.tanlab.org/DSigDBv1.0/download.html
###Code
# appyter init
from appyter import magic
magic.init(lambda _=globals: _())
import sys
import os
from datetime import date
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import harmonizome.utility_functions as uf
import harmonizome.lookup as lookup
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Notebook Information
###Code
print('This notebook was run on:', date.today(), '\nPython version:', sys.version)
###Output
_____no_output_____
###Markdown
Initialization
###Code
%%appyter hide_code
{% do SectionField(
name='data',
title='Upload Data',
img='load_icon.png'
) %}
{% do SectionField(
name='settings',
title='Settings',
img='setting_icon.png'
) %}
%%appyter code_eval
{% do DescriptionField(
name='description',
text='The example below was sourced from <a href="http://dsigdb.tanlab.org/DSigDBv1.0/download.html" target="_blank">dsigdb.tanlab.org/DSigDBv1.0</a>. If clicking on the example does not work, it should be downloaded directly from the source website.',
section='data'
) %}
{% set df_file = FileField(
constraint='.*\.txt$',
name='dsigdb data',
label='DSigDB Data (txt)',
default='DSigDB_All_detailed.txt',
section='data'
) %}
%%appyter code_eval
{% set group = ChoiceField(
name='drug subset',
label='Drug Subset',
choices={
'Computational Drug Signatures': "'Computational'",
'FDA Approved Drugs': "'FDA'",
'Kinase Inhibitors': "'Kinase'",
'Peturbagen Signatures': "'Peturbagen'"
},
default='Computational Drug Signatures',
section='settings'
) %}
###Output
_____no_output_____
###Markdown
Load Mapping Dictionaries
###Code
symbol_lookup, geneid_lookup = lookup.get_lookups()
###Output
_____no_output_____
###Markdown
Output Path
###Code
%%appyter code_exec
output_name = 'dsigdb_' + {{group}}.lower()
path = 'Output/DSigDB-' + {{group}}
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Load Data
###Code
%%appyter code_exec
df = pd.read_csv(
{{df_file}},
sep='\t', usecols=['Drug', 'Gene', 'Source']
)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Pre-process Data Get Relevant Data
###Code
%%appyter code_exec
sources = {
'Computational': ['BOSS', 'CTD', 'TTD', 'D4 PubChem', 'D4 ChEMBL'],
'FDA': ['D1'],
'Kinase': ['FDA', 'Kinome Scan', 'LINCS', 'MRC', 'GSK', 'Roche', 'RBC'],
'Peturbagen': ['CMAP']
}[{{group}}]
# Get only relevant resources for desired drugs
df = df.dropna()
df = df[df['Source'].str.contains('|'.join(sources))]
df.head()
df = df.drop('Source', axis=1).set_index('Gene')
df.index.name = 'Gene Symbol'
df.head()
###Output
_____no_output_____
###Markdown
Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols
###Code
df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True)
df.shape
###Output
_____no_output_____
###Markdown
Analyze Data Create Binary Matrix
###Code
binary_matrix = uf.binary_matrix(df)
binary_matrix.head()
binary_matrix.shape
uf.save_data(binary_matrix, path, output_name + '_binary_matrix',
compression='npz', dtype=np.uint8)
###Output
_____no_output_____
###Markdown
Create Gene List
###Code
gene_list = uf.gene_list(binary_matrix, geneid_lookup)
gene_list.head()
gene_list.shape
uf.save_data(gene_list, path, output_name + '_gene_list',
ext='tsv', compression='gzip', index=False)
###Output
_____no_output_____
###Markdown
Create Attribute List
###Code
attribute_list = uf.attribute_list(binary_matrix)
attribute_list.head()
attribute_list.shape
uf.save_data(attribute_list, path, output_name + '_attribute_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Gene and Attribute Set Libraries
###Code
uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set')
uf.save_setlib(binary_matrix, 'attribute', 'up', path,
output_name + '_attribute_up_set')
###Output
_____no_output_____
###Markdown
Create Attribute Similarity Matrix
###Code
attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True)
attribute_similarity_matrix.head()
uf.save_data(attribute_similarity_matrix, path,
output_name + '_attribute_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene Similarity Matrix
###Code
gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True)
gene_similarity_matrix.head()
uf.save_data(gene_similarity_matrix, path,
output_name + '_gene_similarity_matrix',
compression='npz', symmetric=True, dtype=np.float32)
###Output
_____no_output_____
###Markdown
Create Gene-Attribute Edge List
###Code
edge_list = uf.edge_list(binary_matrix)
uf.save_data(edge_list, path, output_name + '_edge_list',
ext='tsv', compression='gzip')
###Output
_____no_output_____
###Markdown
Create Downloadable Save File
###Code
uf.archive(path)
###Output
_____no_output_____ |
src/01_export_masterfile.ipynb | ###Markdown
Export Master File. Author: Cristian E. Nuno. Date: October 12, 2018. Purpose: * unzips .zip files; * reads them as separate data frames (for both the `pltfail` and `prevent` worksheets); * binds them together into one data frame; and * exports the results as a .csv file
###Code
# see the value of multiple statements at once
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# load necessary packages
import pandas as pd
import zipfile
import sys
import os
import re
import xlrd
# print version of python
print(sys.version)
# set working directory
directory = "/Users/cristiannuno/Desktop/Advanced_Analytics/DABP_Group_Project/"
raw_data = "raw_data/"
write_data = "write_data/"
os.chdir(directory + raw_data)
# load in states data frames
states = pd.read_csv("states.csv")
states.head()
# store abbreviations
state_abb = states["abb"]
# store all files in current wd
# sorting the items in the list alpabetically
file_names = sorted(os.listdir("."))
# create regex to only give back file names that contain .zip
my_pattern = re.compile(".zip$")
# store zip files that satisfy the regex
my_zip_files = list(filter(my_pattern.search, file_names))
# unzip each file in my_zip_files
for i in my_zip_files:
print(i)
zip = zipfile.ZipFile(file = i, mode = "r")
zip.extractall(os.getcwd())
zip.close()
###Output
2007_fsa_acres_sum_final.zip
2008_fsa_acres_sum_final.zip
2009_fsa_acres_sum_final_8.zip
2010_fsa_acres_sum_final_6.zip
2011_fsa_acres_sum_jan2012.zip
2012_fsa_acres_jan_2013.zip
2013_fsa_acres_jan_2014.zip
2014_fsa_acres_Jan2014.zip
2015_fsa_acres_Jan2016_sq19.zip
2016_fsa_acres_jan2017_edr32.zip
2017_fsa_acres_jan2018.zip
###Markdown
Inspecting `.xlsx` files. After unzipping the files, I've manually opened them to inspect the `.xlsx` files. What I found was that each year contains two worksheets:* `pltfail`: one row per state (including Puerto Rico and the Virgin Islands) representing the planted acres (including failed acres) reported to the Farm Service Agency for a variety of crops; and* `prevent`: one row per state (including Puerto Rico and the Virgin Islands) representing the prevented acres reported to the Farm Service Agency for a variety of crops. This means we'll have two master files: one for planted and another for prevented acres from 2007 to 2017. *Note: not all of the workbooks spell these two sheets the same. However, each one starts with `pltfail`, followed by `prevent`.* Column Names: While the spellings of the column names are not all the same, thankfully, they are all in the correct order. Each year follows this pattern:* `state`: the state name* `barley`: acres for barley* `corn`: acres for corn* `cotton_els`: acres for cotton, extra long staple* `cotton_upland`: acres for cotton, upland* `oats`: acres for oats* `rice`: acres for rice* `sorghum`: acres for sorghum* `soybeans`: acres for soybeans* `sugar_beets`: acres for sugar beets* `sugarcane`: acres for sugarcane* `wheat`: acres for wheat* `total`: sums the acres across all crops. There is no need for the `total` column so it'll be dropped. Note: The `.xlsx` files are not normalized in the sense that sheet names are spelled differently and the headers start at different rows. Going to manually import one data frame at a time.
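Since the sheet names and header rows vary across workbooks, one quick sanity check (a sketch added for illustration, not part of the original workflow; it assumes the unzipped `.xlsx` files sit in the current working directory) is to list each workbook's sheets and preview its first rows with `pandas.ExcelFile` before parsing:

```python
import glob
import pandas as pd

# list each workbook's sheet names and peek at its first rows
# to see how the sheets are spelled and where the header starts
for f in sorted(glob.glob("*.xlsx")):
    book = pd.ExcelFile(f)
    print(f, book.sheet_names)
    print(book.parse(sheet_name=0, header=None, nrows=5))
```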
###Code
# create regex to only give back file names that contain .xlsx
my_pattern = re.compile(".xlsx$")
# store xlsx files that satisfy the regex
my_xlsx_files = list(filter(my_pattern.search, file_names))
# store the first four digits from each item in the list
first_four_digits = []
for i in range(0, len(my_xlsx_files)):
first_four_digits.append(int(my_xlsx_files[i][:4]))
# store column names
standard_col_names = ["state", "barley", "corn"
, "cotton_els", "cotton_upland", "oats"
, "rice", "sorghum", "soybeans"
, "sugar_beets", "sugarcane", "wheat"
, "total"]
# store non state names
non_states = ['Grand Total', 'US']
###Output
_____no_output_____
###Markdown
For each excel file, do the following:* transform it into a data frame;* standardize the column names;* create a `year` column;* create a `type` column to identify if this is plant and fail data or prevention data;* then row bind all the data frames together in `df`
###Code
# create an empty data frame
df = pd.DataFrame()
for i in range(0, len(my_xlsx_files)):
# for years 2007 - 2011
# make the header the second row in the file
if i in list(range(0, 5)):
header_row = 1
# otherwise, make the header the fourth row in the file
else:
header_row = 3
# print the current file being processed
print(my_xlsx_files[i] + " is currently being processed...")
# read in pltfail sheet
pltfail = pd.read_excel(io = my_xlsx_files[i] \
, sheet_name = 0 \
, header = header_row)
# standardize column names
pltfail.columns = standard_col_names
# create the year column
pltfail["year"] = first_four_digits[i]
# create the type column
pltfail["type"] = "plant and fail"
# read in prevent sheet
prevent = pd.read_excel(io = my_xlsx_files[i] \
, sheet_name = 1 \
, header = header_row)
# standardize column names
prevent.columns = standard_col_names
# create the year column
prevent["year"] = first_four_digits[i]
# create the type column
prevent["type"] = "prevent"
# append pltfail and prevent to temp
temp = pltfail.append(prevent)
# append temp to df
df = df.append(temp)
# print the current file that was binded to df
print("... " + my_xlsx_files[i] + " was successfully binded to `df`.")
df.shape
df.columns
df["year"].value_counts()
df["type"].value_counts()
###Output
2007_fsa_acres_summary_final.xlsx is currently being processed...
... 2007_fsa_acres_summary_final.xlsx was successfully binded to `df`.
2008_fsa_acres_summary_final.xlsx is currently being processed...
... 2008_fsa_acres_summary_final.xlsx was successfully binded to `df`.
2009_fsa_acres_summary_final_8.xlsx is currently being processed...
... 2009_fsa_acres_summary_final_8.xlsx was successfully binded to `df`.
2010_fsa_acres_summary_final_6.xlsx is currently being processed...
... 2010_fsa_acres_summary_final_6.xlsx was successfully binded to `df`.
2011_fsa_acres_sum_jan2012.xlsx is currently being processed...
... 2011_fsa_acres_sum_jan2012.xlsx was successfully binded to `df`.
2012_fsa_acres_jan_2013.xlsx is currently being processed...
... 2012_fsa_acres_jan_2013.xlsx was successfully binded to `df`.
2013_fsa_acres_jan_2014.xlsx is currently being processed...
... 2013_fsa_acres_jan_2014.xlsx was successfully binded to `df`.
2014_fsa_acres_jan2014.xlsx is currently being processed...
... 2014_fsa_acres_jan2014.xlsx was successfully binded to `df`.
2015_fsa_acres_01052016.xlsx is currently being processed...
... 2015_fsa_acres_01052016.xlsx was successfully binded to `df`.
2016_fsa_acres_010417.xlsx is currently being processed...
... 2016_fsa_acres_010417.xlsx was successfully binded to `df`.
2017_fsa_acres_010418.xlsx is currently being processed...
... 2017_fsa_acres_010418.xlsx was successfully binded to `df`.
###Markdown
Now that we have our final data set, let's clean it up a bit:* clean white space in the `state` column and make it title case;* drop the `total` column;* keep all rows where the `state` value doesn't equal `non_states`; and* merge the `state` data frame onto `df_clean`.
###Code
# drop total
df_clean = df.drop("total", axis = 1)
# keep records of only states
df_clean = df_clean.query("state not in @non_states")
df_clean.shape
# keep certain rows
df_clean_abb = df_clean.query("state in @state_abb")
df_clean_nam = df_clean.query("state not in @state_abb")
# join states onto df_clean_abb
# remove the original state column
# and rename 'name' as state
df_clean_abb = pd.merge(df_clean_abb, states \
, how = "left", left_on = "state" \
, right_on = "abb")
df_clean_abb = df_clean_abb.drop("state", axis = 1)
df_clean_abb.columns = ["state" if x == "name" else x for x in df_clean_abb.columns]
df_clean_abb.shape
# clean the state column prior to merging
df_clean_nam["state"] = df_clean_nam["state"].str.strip().str.title()
# join states onto df_clean_nam
df_clean_nam = pd.merge(df_clean_nam, states \
, how = "left", left_on = "state" \
, right_on = "name")
# drop the name column
df_clean_nam = df_clean_nam.drop("name", axis = 1)
df_clean_nam.shape
# bind the two data frames together
df_clean = df_clean_abb.append(df_clean_nam)
df_clean.shape
###Output
_____no_output_____
###Markdown
Export data. Prior to exporting, rearrange the columns so that the data frame is more intuitive.
###Code
# set working directory
os.chdir(directory + write_data)
# reorder columns
# note: can only do this with integer positions rather than column names which is not ideal
new_order = [14, 16, 11, 0, 7, 5
, 1, 2, 3, 4, 6, 8
, 9, 10, 12, 13, 15]
#new_order = ["type", "year", "state", "abb", "region", "division"
# , "barley", "corn", "cotton_els", "cotton_upland", "oats", "rice"
# , "sorghum", "soybeans", "sugar_beets", "sugarcane", "wheat"]
df_clean = df_clean[df_clean.columns[new_order]]
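# (added note, a sketch only) pandas can in fact reorder by column names directly, e.g.
# df_clean = df_clean.reindex(columns=["type", "year", "state", "abb", "region", "division",
#                                      "barley", "corn", "cotton_els", "cotton_upland", "oats", "rice",
#                                      "sorghum", "soybeans", "sugar_beets", "sugarcane", "wheat"])
# assuming those are the exact column names present in df_clean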
df_clean.to_csv("clean_crop_2011_2017.csv", index = False)
###Output
_____no_output_____ |
notebooks/T9 - 2 - Analisis de Componentes Principales SK Learn.ipynb | ###Markdown
Principal Component Analysis - Using SkLearn
###Code
import pandas as pd
import chart_studio.plotly as py
import plotly.graph_objs as ir
import plotly.tools as tls
import chart_studio
import plotly.graph_objects as go
from sklearn.preprocessing import StandardScaler
# Load the credentials of our account on the Plotly page with our auto-generated key (it can be changed)
chart_studio.tools.set_credentials_file(username='dasafo', api_key='GcvEWHWyosJNEqDGeOcz')
df = pd.read_csv("../datasets/iris/iris.csv")
X = df.iloc[:,0:4].values # without Species
y = df.iloc[:,4].values # the Species column
X_std = StandardScaler().fit_transform(X)
###Output
_____no_output_____
###Markdown
**We apply the SkLearn library**
###Code
from sklearn.decomposition import PCA as sk_pca
acp = sk_pca(n_components=2) # here we already specify the optimal number of components, which was 2 (calculated manually in T10-2)
Y = acp.fit_transform(X_std) # this function uses part c) of singular value decomposition explained in T10-1
Y
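# (added note, a sketch) the variance captured by the two retained components can be checked
# with the standard sklearn PCA attribute explained_variance_ratio_, e.g.:
#     print(acp.explained_variance_ratio_, acp.explained_variance_ratio_.sum())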
results = []
for name in ('setosa', 'versicolor', 'virginica'):
result = go.Scatter(x = Y[y==name,0], y = Y[y==name, 1],
mode = "markers", name = name,
marker = go.Marker(size=8, line=go.Line(color="rgba(225,225,225,0.2)", width=0.5),
opacity = 0.75))
results.append(result)
data = go.Data(results)
layout = go.Layout(xaxis = go.XAxis(title="CP1", showline=False),
yaxis = go.YAxis(title="CP2", showline=False))
fig = go.Figure(data = data, layout = layout)
chart_studio.plotly.iplot(fig)
py.iplot(fig)
###Output
/home/david/anaconda3/lib/python3.7/site-packages/plotly/graph_objs/_deprecations.py:385: DeprecationWarning:
plotly.graph_objs.Line is deprecated.
Please replace it with one of the following more specific types
- plotly.graph_objs.scatter.Line
- plotly.graph_objs.layout.shape.Line
- etc.
/home/david/anaconda3/lib/python3.7/site-packages/plotly/graph_objs/_deprecations.py:441: DeprecationWarning:
plotly.graph_objs.Marker is deprecated.
Please replace it with one of the following more specific types
- plotly.graph_objs.scatter.Marker
- plotly.graph_objs.histogram.selected.Marker
- etc.
/home/david/anaconda3/lib/python3.7/site-packages/plotly/graph_objs/_deprecations.py:40: DeprecationWarning:
plotly.graph_objs.Data is deprecated.
Please replace it with a list or tuple of instances of the following types
- plotly.graph_objs.Scatter
- plotly.graph_objs.Bar
- plotly.graph_objs.Area
- plotly.graph_objs.Histogram
- etc.
/home/david/anaconda3/lib/python3.7/site-packages/plotly/graph_objs/_deprecations.py:550: DeprecationWarning:
plotly.graph_objs.XAxis is deprecated.
Please replace it with one of the following more specific types
- plotly.graph_objs.layout.XAxis
- plotly.graph_objs.layout.scene.XAxis
/home/david/anaconda3/lib/python3.7/site-packages/plotly/graph_objs/_deprecations.py:578: DeprecationWarning:
plotly.graph_objs.YAxis is deprecated.
Please replace it with one of the following more specific types
- plotly.graph_objs.layout.YAxis
- plotly.graph_objs.layout.scene.YAxis
|
Regression_Sprint_Challenge.ipynb | ###Markdown
_Lambda School Data Science_ Regression Sprint Challenge For this Sprint Challenge, you'll predict the price of used cars. The dataset is real-world. It was collected from advertisements of cars for sale in the Ukraine in 2016.The following import statements have been provided for you, and should be sufficient. But you may not need to use every import. And you are permitted to make additional imports.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
###Output
_____no_output_____
###Markdown
[The dataset](https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv) contains 8,495 rows and 9 variables:- make: manufacturer brand- price: seller’s price in advertisement (in USD)- body: car body type- mileage: as mentioned in advertisement (‘000 Km)- engV: rounded engine volume (‘000 cubic cm)- engType: type of fuel- registration: whether car registered in Ukraine or not- year: year of production- drive: drive typeRun this cell to read the data:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv')
print(df.shape)
df.sample(10)
###Output
(8495, 9)
###Markdown
Predictive Modeling with Linear Regression 1.1 Split the data into an X matrix and y vector (`price` is the target we want to predict).
###Code
features=['make', 'body', 'mileage', 'engV', 'engType', 'registration',
'year', 'drive']
target=['price']
X=df.copy()[features]
y=df.copy()[target]
X.head()
###Output
_____no_output_____
###Markdown
1.2 Split the data into test and train sets, using `train_test_split`.You may use a train size of 80% and a test size of 20%.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=50)
###Output
_____no_output_____
###Markdown
1.3 Use scikit-learn to fit a multiple regression model, using your training data.Use `year` and one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.
###Code
lin_reg=LinearRegression()
lin_reg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
1.4 Report the Intercept and Coefficients for the fitted model.
###Code
intercept=lin_reg.intercept_
intercept
coefficient=lin_reg.coef_
coefficient
###Output
_____no_output_____
###Markdown
1.5 Use the test data to make predictions.
###Code
y_pred=lin_reg.predict(X_test)
###Output
_____no_output_____
###Markdown
1.6 Use the test data to get both the Root Mean Square Error and $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
rmse = (np.sqrt(mean_squared_error(y_test, y_pred)))
rmse
r2 = r2_score(y_test, y_pred)
r2
###Output
_____no_output_____
###Markdown
1.7 How should we interpret the coefficient corresponding to the `year` feature?One sentence can be sufficient. It's the gradient of the line we have fit by ordinary least squares to the training set of data for that particular dimension, i.e. if y=mx+c where x is an input from the year feature and y is our prediction, m is the slope of that line. 1.8 How should we interpret the Root Mean Square Error?One sentence can be sufficient. RMSE is a measure of the difference between values predicted by the model and values observed. 1.9 How should we interpret the $R^2$?One sentence can be sufficient. $R^2$ is the amount of variance in the target variable that one can predict from the feature variables. Log-Linear and Polynomial Regression 2.1 Engineer a new variable by taking the log of the price variable.
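To make the RMSE and $R^2$ interpretations above concrete, here is a small illustrative sketch (added here, not part of the original solution) that computes both metrics directly from their definitions; the arrays are made-up numbers used only for the illustration.

```python
import numpy as np

y_true = np.array([10000., 12000., 9000., 15000.])   # hypothetical observed prices
y_hat = np.array([11000., 11500., 9500., 14000.])    # hypothetical predictions

rmse = np.sqrt(np.mean((y_true - y_hat) ** 2))        # typical size of a prediction error
ss_res = np.sum((y_true - y_hat) ** 2)                # unexplained variation
ss_tot = np.sum((y_true - y_true.mean()) ** 2)        # total variation around the mean
r2 = 1 - ss_res / ss_tot                              # share of variance explained

print(rmse, r2)
```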
###Code
df['ln_price']=np.log(df['price'])
###Output
_____no_output_____
###Markdown
2.2 Visualize scatterplots of the relationship between each feature versus the log of price, to look for non-linearly distributed features.You may use any plotting tools and techniques.
###Code
for feat in features:
sns.residplot(df[feat], df['ln_price'], lowess=True, line_kws=dict(color='r'))
plt.show()
# it looks like year has some kind of polynomial relationship which makes sense as some vintage models will be worth a lot,
# while vehicles of medium age will be worth less. engine volume is another good candidate. mileage has definitely
# not got a linear relationship to price, but it doesn't look polynomial either
###Output
_____no_output_____
###Markdown
2.3 Create polynomial feature(s)You will not be evaluated on which feature(s) you choose. But try to choose appropriate features.
###Code
df['year_sqrd']=df['year']**2
df['engV_sqrd']=df['engV']**2
###Output
_____no_output_____
###Markdown
2.4 Use the new log-transformed y variable and your x variables (including any new polynomial features) to fit a new linear regression model. Then report the: intercept, coefficients, RMSE, and $R^2$.
###Code
poly_feats=['year_sqrd','engV_sqrd']
all_feats=features+poly_feats
X_log=df.copy()[all_feats]
y_log=df.copy()['ln_price']
X_trainl, X_testl, y_trainl, y_testl = train_test_split(X_log, y_log, train_size=0.80, test_size=0.20, random_state=50)
log_reg=LinearRegression()
log_reg.fit(X_trainl, y_trainl)
y_pred_log=log_reg.predict(X_testl)
r2_log = r2_score(y_testl, y_pred_log)
rmse_log=(np.sqrt(mean_squared_error(y_testl, y_pred_log)))
r2_log,rmse_log
intercept_log=log_reg.intercept_
intercept_log
coefficient_log=log_reg.coef_
coefficient_log
###Output
_____no_output_____
###Markdown
2.5 How do we interpret coefficients in Log-Linear Regression (differently than Ordinary Least Squares Regression)?One sentence can be sufficient. Our coefficients now represent the rate of change per feature in percentage terms, which can make them more intuitive. Decision Trees 3.1 Use scikit-learn to fit a decision tree regression model, using your training data.Use one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.You may use the log-transformed target or the original un-transformed target. You will not be evaluated on which you choose.
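As a concrete illustration of the point above (a sketch added here, not part of the original answer): with $\ln(price)$ as the target, a coefficient $\beta$ on a feature means a one-unit increase in that feature multiplies the predicted price by $e^{\beta}$, which is roughly a $100\beta\%$ change when $\beta$ is small.

```python
import numpy as np

beta = 0.05                             # hypothetical coefficient from a log-linear fit
exact_pct = (np.exp(beta) - 1) * 100    # exact percent change in price per unit increase
approx_pct = beta * 100                 # the usual small-coefficient approximation

print(exact_pct, approx_pct)            # about 5.13% vs 5%
```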
###Code
tree = DecisionTreeRegressor(max_depth=5)
#using all the log data from previous example
X_log=df.copy()[all_feats]
y_log=df.copy()['ln_price']
X_trainl, X_testl, y_trainl, y_testl = train_test_split(X_log, y_log, train_size=0.80, test_size=0.20, random_state=50)
#fitting new model
tree.fit(X_trainl,y_trainl)
###Output
_____no_output_____
###Markdown
3.2 Use the test data to get the $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
y_pred_tree=tree.predict(X_testl)
r2_DTR=r2_score(y_testl, y_pred_tree)
r2_DTR
###Output
_____no_output_____
###Markdown
Regression Diagnostics 4.1 Use statsmodels to run a log-linear or log-polynomial linear regression with robust standard errors.
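A note on the robust standard errors part of the prompt (an added sketch; the cell below uses the default covariance): statsmodels exposes heteroskedasticity-robust covariance through the `cov_type` argument of `fit`, for example:

```python
# same log-polynomial model, but with HC3 (heteroskedasticity-robust) standard errors
robust_results = sm.OLS(y_log, sm.add_constant(X_log)).fit(cov_type='HC3')
print(robust_results.summary())
```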
###Code
# running the log-polynomial regression
X_log=df.copy()[all_feats]
y_log=df.copy()['ln_price']
model=sm.OLS(y_log, sm.add_constant(X_log))
results = model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: ln_price R-squared: 0.744
Model: OLS Adj. R-squared: 0.743
Method: Least Squares F-statistic: 2462.
Date: Fri, 03 May 2019 Prob (F-statistic): 0.00
Time: 15:43:32 Log-Likelihood: -5940.3
No. Observations: 8495 AIC: 1.190e+04
Df Residuals: 8484 BIC: 1.198e+04
Df Model: 10
Covariance Type: nonrobust
================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------
const 6816.3696 287.396 23.718 0.000 6253.003 7379.737
make -0.0013 0.000 -5.746 0.000 -0.002 -0.001
body -0.0740 0.004 -20.764 0.000 -0.081 -0.067
mileage 0.0005 7.56e-05 6.658 0.000 0.000 0.001
engV 0.2330 0.005 48.848 0.000 0.224 0.242
engType -0.0470 0.004 -11.091 0.000 -0.055 -0.039
registration 0.6713 0.024 28.277 0.000 0.625 0.718
year -6.9006 0.287 -24.007 0.000 -7.464 -6.337
drive 0.2620 0.008 32.872 0.000 0.246 0.278
year_sqrd 0.0017 7.19e-05 24.322 0.000 0.002 0.002
engV_sqrd -0.0024 4.93e-05 -48.092 0.000 -0.002 -0.002
==============================================================================
Omnibus: 2583.672 Durbin-Watson: 1.945
Prob(Omnibus): 0.000 Jarque-Bera (JB): 22924.444
Skew: -1.202 Prob(JB): 0.00
Kurtosis: 10.680 Cond. No. 2.19e+11
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 2.19e+11. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
4.2 Calculate the Variance Inflation Factor (VIF) of our X variables. Do we have multicollinearity problems?One sentence can be sufficient
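For reference (a standard definition added for clarity, not part of the original solution), the quantity computed in the next cell is, for each feature $x_j$, $$VIF_j = \frac{1}{1 - R_j^2},$$ where $R_j^2$ is the $R^2$ from regressing $x_j$ on the remaining features; values much larger than about 5 to 10 are usually read as a sign of multicollinearity.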
###Code
pd.options.display.float_format = '{:,.2f}'.format
vif = [variance_inflation_factor(sm.add_constant(X_log).values, i) for i in range(len(sm.add_constant(X_log).columns))]
# we have to reverse the log here so we don't just see all these scientific values
df_exp=pd.DataFrame(pd.Series(vif, sm.add_constant(X_log).columns))
df_exp.columns=['vif']
df_exp['vif']=np.e**df_exp['vif']
df_exp
# in the collinearity test, as expected, the column of ones is very high (though we ignore this), and year and year sq and also engV
# and engV sqrd. the rest are ok as far as collinearity is concerned
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Regression Sprint Challenge For this Sprint Challenge, you'll predict the price of used cars. The dataset is real-world. It was collected from advertisements of cars for sale in the Ukraine in 2016.The following import statements have been provided for you, and should be sufficient. But you may not need to use every import. And you are permitted to make additional imports.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
###Output
_____no_output_____
###Markdown
[The dataset](https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv) contains 8,495 rows and 9 variables:- make: manufacturer brand- price: seller’s price in advertisement (in USD)- body: car body type- mileage: as mentioned in advertisement (‘000 Km)- engV: rounded engine volume (‘000 cubic cm)- engType: type of fuel- registration: whether car registered in Ukraine or not- year: year of production- drive: drive typeRun this cell to read the data:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv')
print(df.shape)
df.sample(10)
###Output
(8495, 9)
###Markdown
Predictive Modeling with Linear Regression 1.1 Split the data into an X matrix and y vector (`price` is the target we want to predict).
###Code
y = df['price']
X = df.drop(columns='price')
sns.set(style="ticks", color_codes=True)
sns.pairplot(data=df, y_vars=['price'], x_vars=X.columns)
plt.show()
###Output
_____no_output_____
###Markdown
1.2 Split the data into test and train sets, using `train_test_split`.You may use a train size of 80% and a test size of 20%.
###Code
df_train, df_test = train_test_split(df.copy(), test_size = 0.2, random_state = 0)
features = ['make', 'body', 'mileage',
'engV', 'engType', 'registration',
'year', 'drive']
target = 'price'
X_train = df_train[features]
X_test = df_test[features]
Y_train = pd.DataFrame(data=df_train[target], columns = [target])
Y_test = pd.DataFrame(data=df_test[target], columns = [target])
###Output
_____no_output_____
###Markdown
1.3 Use scikit-learn to fit a multiple regression model, using your training data.Use `year` and one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.
###Code
regr = LinearRegression()
test_feat = ['year', 'mileage', 'body', 'engV']
regr.fit(X_train[test_feat], Y_train)
###Output
_____no_output_____
###Markdown
1.4 Report the Intercept and Coefficients for the fitted model.
###Code
print("Coefficients:", regr.coef_)
print("Intercept:", regr.intercept_)
###Output
Coefficients: [[ 1066.92112811 -34.83635579 -2625.16791432 414.39237857]]
Intercept: [-2114649.3121312]
###Markdown
1.5 Use the test data to make predictions.
###Code
Y_test_pred = regr.predict(X_test[test_feat])
###Output
_____no_output_____
###Markdown
1.6 Use the test data to get both the Root Mean Square Error and $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
print("RMSE:", np.sqrt(mean_squared_error(Y_test, Y_test_pred)))
print("R^2:", r2_score(y_true = Y_test, y_pred = Y_test_pred))
###Output
RMSE: 20720.302279591986
R^2: 0.21161915690871225
###Markdown
1.7 How should we interpret the coefficient corresponding to the `year` feature?One sentence can be sufficient. The coefficient is positive, meaning later years correspond to higher prices. Every year that is added corresponds on average to a price increase of 1098.28. 1.8 How should we interpret the Root Mean Square Error?One sentence can be sufficient. The regression line predicts the average y value associated with the given value of x. The RMSE is a measure of the y values' spread around the average. 1.9 How should we interpret the $R^2$?One sentence can be sufficient. R^2 measures the accuracy of fit, equal to the percentage of variance in the dependent variable explained by the model. Log-Linear and Polynomial Regression 2.1 Engineer a new variable by taking the log of the price variable.
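For reference, the standard definitions behind the two interpretations above (added for clarity; not part of the original answer) are $$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}.$$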
###Code
df_train['ln_price'] = np.log(df_train['price'])
Y_train['ln_price'] = np.log(Y_train)
Y_test['ln_price'] = np.log(Y_test)
###Output
_____no_output_____
###Markdown
2.2 Visualize scatterplots of the relationship between each feature versus the log of price, to look for non-linearly distributed features.You may use any plotting tools and techniques.
###Code
sns.set(style="ticks", color_codes=True)
sns.pairplot(data=df, y_vars=['ln_price'], x_vars=X.columns)
plt.show()
###Output
_____no_output_____
###Markdown
2.3 Create polynomial feature(s)You will not be evaluated on which feature(s) you choose. But try to choose appropriate features.
###Code
X_train['mileage**2'] = X_train['mileage']**2
X_test['mileage**2'] = X_test['mileage']**2
X_train['year**2'] = X_train['year']**2
X_test['year**2'] = X_test['year']**2
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
This is separate from the ipykernel package so we can avoid doing imports until
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
###Markdown
2.4 Use the new log-transformed y variable and your x variables (including any new polynomial features) to fit a new linear regression model. Then report the: intercept, coefficients, RMSE, and $R^2$.
###Code
target = 'ln_price'
new_regr = LinearRegression()
test_feat = ['year', 'mileage', 'body', 'engV', 'mileage**2', 'year**2']
new_regr.fit(X_train[test_feat], Y_train[target])
df_y_test_pred = new_regr.predict(X_test[test_feat])
print("Coefficients:", new_regr.coef_)
print("Intercept:", new_regr.intercept_)
print("R^2:", r2_score(y_true = Y_test[target], y_pred = df_y_test_pred))
print("RMSE:", np.sqrt(mean_squared_error(Y_test[target], df_y_test_pred)))
###Output
Coefficients: [-8.60961831e+00 9.97143297e-04 -1.25280332e-01 1.50663922e-02
-1.56228642e-07 2.17572167e-03]
Intercept: 8524.850620115481
R^2: 0.554215932139115
RMSE: 0.6430218601238622
###Markdown
2.5 How do we interpret coefficients in Log-Linear Regression (differently than Ordinary Least Squares Regression)?One sentence can be sufficient. Each coefficient relates to a percent difference in the target variable; the coefficient shows a proportional change. Decision Trees 3.1 Use scikit-learn to fit a decision tree regression model, using your training data.Use one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.You may use the log-transformed target or the original un-transformed target. You will not be evaluated on which you choose.
###Code
tree = DecisionTreeRegressor(max_depth = 3)
tree.fit(X_train, Y_train['price'])
###Output
_____no_output_____
###Markdown
3.2 Use the test data to get the $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
y_true = Y_test['price']
y_pred = tree.predict(X_test)
print("R^2 Score:", r2_score(y_true, y_pred))
###Output
R^2 Score: 0.6315207571503785
###Markdown
Regression Diagnostics 4.1 Use statsmodels to run a log-linear or log-polynomial linear regression with robust standard errors.
###Code
model = sm.OLS(Y_train['ln_price'], X_train)
results = model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: ln_price R-squared: 0.996
Model: OLS Adj. R-squared: 0.996
Method: Least Squares F-statistic: 1.867e+05
Date: Fri, 03 May 2019 Prob (F-statistic): 0.00
Time: 18:28:17 Log-Likelihood: -5654.1
No. Observations: 6796 AIC: 1.133e+04
Df Residuals: 6786 BIC: 1.140e+04
Df Model: 10
Covariance Type: nonrobust
================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------
make -0.0016 0.000 -5.580 0.000 -0.002 -0.001
body -0.0936 0.005 -20.599 0.000 -0.103 -0.085
mileage -0.0007 0.000 -4.304 0.000 -0.001 -0.000
engV 0.0114 0.001 8.490 0.000 0.009 0.014
engType -0.0599 0.005 -11.110 0.000 -0.070 -0.049
registration 0.7140 0.030 23.713 0.000 0.655 0.773
year -0.0874 0.001 -72.084 0.000 -0.090 -0.085
drive 0.3844 0.010 39.666 0.000 0.365 0.403
mileage**2 1.546e-06 3.1e-07 4.988 0.000 9.39e-07 2.15e-06
year**2 4.57e-05 6.01e-07 75.986 0.000 4.45e-05 4.69e-05
==============================================================================
Omnibus: 244.732 Durbin-Watson: 2.032
Prob(Omnibus): 0.000 Jarque-Bera (JB): 695.399
Skew: -0.082 Prob(JB): 9.91e-152
Kurtosis: 4.559 Cond. No. 1.80e+07
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.8e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
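###Markdown
Note that the summary above reports `Covariance Type: nonrobust`, while the question asks for robust standard errors. A minimal sketch of the robust variant is below, assuming the same design matrix and target; it simply passes `cov_type='HC3'` to statsmodels' `fit`, which changes the reported standard errors and test statistics but not the coefficient estimates.
###Code
# Same model as above, but with a heteroscedasticity-robust (HC3) covariance estimator.
robust_results = sm.OLS(Y_train['ln_price'], X_train).fit(cov_type='HC3')
print(robust_results.summary())
###Output
_____no_output_____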
###Markdown
4.2 Calculate the Variance Inflation Factor (VIF) of our X variables. Do we have multicollinearity problems?One sentence can be sufficient
###Code
vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
flags = [v > 10 for v in vifs]
for col, vif, flagged in zip(X.columns, vifs, flags):
    print(f'{col:15} {vif:<7.2f} {"<<" if flagged else ""}')
###Output
const 3423029618.12 <<
make 1.08
body 11.29 <<
mileage 7.56
engV 23.92 <<
engType 1.34
registration 1.11
year 164292.77 <<
drive 1.25
mileage_sq 5.08
year_sq 164633.03 <<
body_sq 11.58 <<
engV_sq 23.75 <<
###Markdown
_Lambda School Data Science_ Regression Sprint Challenge For this Sprint Challenge, you'll predict the price of used cars. The dataset is real-world. It was collected from advertisements of cars for sale in the Ukraine in 2016.The following import statements have been provided for you, and should be sufficient. But you may not need to use every import. And you are permitted to make additional imports.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
###Output
_____no_output_____
###Markdown
[The dataset](https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv) contains 8,495 rows and 9 variables:- make: manufacturer brand- price: seller’s price in advertisement (in USD)- body: car body type- mileage: as mentioned in advertisement (‘000 Km)- engV: rounded engine volume (‘000 cubic cm)- engType: type of fuel- registration: whether car registered in Ukraine or not- year: year of production- drive: drive typeRun this cell to read the data:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv')
print(df.shape)
df.sample(10)
df.dtypes
###Output
_____no_output_____
###Markdown
Predictive Modeling with Linear Regression 1.1 Split the data into an X matrix and y vector (`price` is the target we want to predict).
###Code
target = "price"
features = ["make", "body", "mileage", "engV", "engType", "registration", "year", "drive"]
X = df[features]
y = df[target]
###Output
_____no_output_____
###Markdown
1.2 Split the data into test and train sets, using `train_test_split`.You may use a train size of 80% and a test size of 20%.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=42)
###Output
_____no_output_____
###Markdown
1.3 Use scikit-learn to fit a multiple regression model, using your training data.Use `year` and one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.
###Code
target = 'price'
features = ["year", "mileage"]
X = df[features]
y = df[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=42)
model = LinearRegression()
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
1.4 Report the Intercept and Coefficients for the fitted model.
###Code
print('Intercept = ', model.intercept_)
print('Coefficients:', pd.Series(model.coef_, features))
###Output
Intercept = -2080152.8168321534
Coefficients: year 1047.962892
mileage -45.955709
dtype: float64
###Markdown
1.5 Use the test data to make predictions.
###Code
print('Car Prices:', model.predict(X_test))
###Output
Car Prices: [25958.82569853 12832.66613332 31446.45552841 ... 20535.1884013
230.919839 21822.81189937]
###Markdown
1.6 Use the test data to get both the Root Mean Square Error and $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
print('RMSE = ', np.sqrt(mean_squared_error(y_test, model.predict(X_test))))
print('R^2 = ', model.score(X_test, y_test))
###Output
RMSE = 23023.641383164486
R^2 = 0.1802189032887348
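###Markdown
To make the interpretation questions that follow more concrete, here is a minimal sketch that recomputes the RMSE directly from the residuals. It reuses `model`, `X_test` and `y_test` from the cells above; the mean absolute error line is just an extra illustration, not part of the assignment.
###Code
# RMSE is the square root of the mean squared residual, so it is expressed in
# the same units as the target (price, in US dollars).
residuals = y_test - model.predict(X_test)
rmse_by_hand = np.sqrt(np.mean(residuals ** 2))
print('RMSE from residuals:', rmse_by_hand)
print('Mean absolute error, for comparison:', np.mean(np.abs(residuals)))
###Output
_____no_output_____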
###Markdown
1.7 How should we interpret the coefficient corresponding to the `year` feature?One sentence can be sufficient The coefficient represents the change in price for each change in year. For example, the estimated price will go up $1048 per year as each year increases. 1.8 How should we interpret the Root Mean Square Error?One sentence can be sufficient It indicates how close the observed data points are to the predicted values. This is represented in the same units as the target variable (price) and measures how accurate the model's predictions are. 1.9 How should we interpret the $R^2$?One sentence can be sufficient This is a relative measure of fit of a model that ranges from 0 to 1. The closer to 1, the better the fit. The model above is pretty terrible. Log-Linear and Polynomial Regression 2.1 Engineer a new variable by taking the log of the price variable.
###Code
df['ln_price'] = np.log(df['price'])
###Output
_____no_output_____
###Markdown
2.2 Visualize scatterplots of the relationship between each feature versus the log of price, to look for non-linearly distributed features.You may use any plotting tools and techniques.
###Code
target = 'ln_price'
features = ["make", "body", "mileage", "engV", "engType", "registration", "year", "drive"]
for feature in features:
sns.scatterplot(x=feature, y=target, data=df, alpha=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
2.3 Create polynomial feature(s)You will not be evaluated on which feature(s) you choose. But try to choose appropriate features.
###Code
df["year_squared"] = df["year"]**2
df["mileage_squared"] = df["mileage"]**2
###Output
_____no_output_____
###Markdown
2.4 Use the new log-transformed y variable and your x variables (including any new polynomial features) to fit a new linear regression model. Then report the: intercept, coefficients, RMSE, and $R^2$.
###Code
target = 'ln_price'
features = ['year_squared', 'mileage_squared']
X = df[features]
y = df[target]
model = LinearRegression()
model.fit(X, y)
print('Intercept: ', model.intercept_)
print('Coefficients: ', pd.Series(model.coef_, features))
print('RMSE = ', np.sqrt(mean_squared_error(y, model.predict(X))))
print('R^2: ', model.score(X, y))
###Output
Intercept: -88.1019876620013
Coefficients: year_squared 2.416484e-05
mileage_squared -2.627169e-08
dtype: float64
RMSE = 0.6896512795293018
R^2: 0.48581975665768096
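###Markdown
Because this model only contains squared terms, the effect of one extra year on log price depends on the year itself: d ln(price)/d year = 2 * b * year, where b is the `year_squared` coefficient. The sketch below evaluates that marginal effect for a few illustrative years (the specific years are arbitrary examples); it reuses `model` and `features` from the cell above.
###Code
# Approximate percent change in price per additional year, evaluated at a given year.
b_year_sq = model.coef_[features.index('year_squared')]
for yr in [1990, 2005, 2015]:
    print(yr, '-> approx % change in price per extra year:',
          round(100 * 2 * b_year_sq * yr, 2))
###Output
_____no_output_____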
###Markdown
2.5 How do we interpret coefficients in Log-Linear Regression (differently than Ordinary Least Squares Regression)?One sentence can be sufficient Rather than dollar amounts, it is a change in percentage. These are both very small in this example, but would mean a price increase of .0024% per year. Decision Trees 3.1 Use scikit-learn to fit a decision tree regression model, using your training data.Use one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.You may use the log-transformed target or the original un-transformed target. You will not be evaluated on which you choose.
###Code
target = "ln_price"
features = ["body", "mileage", "engV", "engType", "year_squared", "drive"]
X = df[features]
y = df[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=42)
tree = DecisionTreeRegressor()
tree.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
3.2 Use the test data to get the $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
print('R^2', tree.score(X_test, y_test))
###Output
R^2 0.7696116236927161
###Markdown
Regression Diagnostics 4.1 Use statsmodels to run a log-linear or log-polynomial linear regression with robust standard errors.
###Code
target = "ln_price"
features = ["body", "mileage", "engV", "engType", "year_squared", "drive"]
X = df[features]
y = df[target]
model = sm.OLS(y, X)
results = model.fit(cov_type='HC3')
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: ln_price R-squared: 0.993
Model: OLS Adj. R-squared: 0.993
Method: Least Squares F-statistic: 2.341e+05
Date: Fri, 03 May 2019 Prob (F-statistic): 0.00
Time: 16:18:22 Log-Likelihood: -9912.4
No. Observations: 8495 AIC: 1.984e+04
Df Residuals: 8489 BIC: 1.988e+04
Df Model: 6
Covariance Type: HC3
================================================================================
coef std err z P>|z| [0.025 0.975]
--------------------------------------------------------------------------------
body -0.1235 0.006 -19.905 0.000 -0.136 -0.111
mileage -0.0036 0.000 -24.221 0.000 -0.004 -0.003
engV 0.0079 0.003 2.631 0.009 0.002 0.014
engType -0.1262 0.007 -19.418 0.000 -0.139 -0.113
year_squared 2.488e-06 6.61e-09 376.142 0.000 2.47e-06 2.5e-06
drive 0.2670 0.015 17.729 0.000 0.237 0.296
==============================================================================
Omnibus: 738.039 Durbin-Watson: 1.913
Prob(Omnibus): 0.000 Jarque-Bera (JB): 1812.198
Skew: -0.519 Prob(JB): 0.00
Kurtosis: 5.010 Cond. No. 5.60e+06
==============================================================================
Warnings:
[1] Standard Errors are heteroscedasticity robust (HC3)
[2] The condition number is large, 5.6e+06. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
4.2 Calculate the Variance Inflation Factor (VIF) of our X variables. Do we have multicollinearity problems?One sentence can be sufficient
###Code
X = sm.add_constant(X)
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns)
###Output
/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py:2389: FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
return ptp(axis=axis, out=out, **kwargs)
###Markdown
None of our VIF values are above 10, so it does not appear that there is multicollinearity. Let's try a correlation matrix to see.
###Code
sns.set(style="white")
corr = df.corr()
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5});
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Regression Sprint Challenge For this Sprint Challenge, you'll predict the price of used cars. The dataset is real-world. It was collected from advertisements of cars for sale in the Ukraine in 2016.The following import statements have been provided for you, and should be sufficient. But you may not need to use every import. And you are permitted to make additional imports.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
###Output
_____no_output_____
###Markdown
[The dataset](https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv) contains 8,495 rows and 9 variables:- make: manufacturer brand- price: seller’s price in advertisement (in USD)- body: car body type- mileage: as mentioned in advertisement (‘000 Km)- engV: rounded engine volume (‘000 cubic cm)- engType: type of fuel- registration: whether car registered in Ukraine or not- year: year of production- drive: drive typeRun this cell to read the data:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv')
print(df.shape)
df.sample(2)
###Output
(8495, 9)
###Markdown
Predictive Modeling with Linear Regression 1.1 Split the data into an X matrix and y vector (`price` is the target we want to predict).
###Code
y = df.price
X = df[df.columns.drop(['price']).tolist()]
X.sample(2)
###Output
_____no_output_____
###Markdown
1.2 Split the data into test and train sets, using `train_test_split`.You may use a train size of 80% and a test size of 20%.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=42)
###Output
_____no_output_____
###Markdown
1.3 Use scikit-learn to fit a multiple regression model, using your training data.Use `year` and one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.
###Code
model = LinearRegression()
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
1.4 Report the Intercept and Coefficients for the fitted model.
###Code
print('Intercept', model.intercept_)
coefficients = pd.Series(model.coef_, X_train.columns)
print(coefficients.to_string())
###Output
Intercept -2269355.0772314165
make -35.167266
body -1770.985091
mileage -40.268597
engV 273.035408
engType -1111.080317
registration 4535.060134
year 1140.731248
drive 8292.046139
###Markdown
1.5 Use the test data to make predictions.
###Code
y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
1.6 Use the test data to get both the Root Mean Square Error and $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
rmse = (np.sqrt(mean_squared_error(y_test, y_pred)))
r2 = r2_score(y_test, y_pred)
print('OLS Test Root Mean Squared Error', rmse)
print('OLS Test R^2 Score', r2)
###Output
OLS Test Root Mean Squared Error 21394.43524600266
OLS Test R^2 Score 0.29213322373743256
###Markdown
1.7 How should we interpret the coefficient corresponding to the `year` feature?One sentence can be sufficient Assuming all other features are held constant, for every unit change in year, the average price should increase by 1140.73 1.8 How should we interpret the Root Mean Square Error?One sentence can be sufficient The RMSE gives the typical size of the model's prediction errors, expressed in the same units as the target (price). 1.9 How should we interpret the $R^2$?One sentence can be sufficient $R^2$ is the proportionate reduction of total variation associated with the use of all the features in the model. Log-Linear and Polynomial Regression 2.1 Engineer a new variable by taking the log of the price variable.
###Code
df['Ln_price'] = np.log(df['price'])
###Output
_____no_output_____
###Markdown
2.2 Visualize scatterplots of the relationship between each feature versus the log of price, to look for non-linearly distributed features.You may use any plotting tools and techniques.
###Code
for feature in X:
sns.scatterplot(x=feature, y=df.Ln_price, data=df, alpha=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
2.3 Create polynomial feature(s)You will not be evaluated on which feature(s) you choose. But try to choose appropriate features.
###Code
X['year ** 2'] = X['year']**2
X['year ** 3'] = X['year']**3
X['Ln_year'] = np.log(X['year'])
X.sample(2)
###Output
_____no_output_____
###Markdown
2.4 Use the new log-transformed y variable and your x variables (including any new polynomial features) to fit a new linear regression model. Then report the: intercept, coefficients, RMSE, and $R^2$.
###Code
def run_linear_model(X, y):
# Split into test and train data
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=42)
# Fit model using train data
model = LinearRegression()
model.fit(X_train, y_train)
# Make predictions using test features
y_pred = model.predict(X_test)
# Compare predictions to test target
rmse = (np.sqrt(mean_squared_error(y_test, y_pred)))
r2 = r2_score(y_test, y_pred)
print('Test Root Mean Squared Error', rmse)
print('Test R^2 Score', r2)
print('Intercept', model.intercept_)
coefficients = pd.Series(model.coef_, X_train.columns)
print(coefficients.to_string())
run_linear_model(X, df.Ln_price)
###Output
Test Root Mean Squared Error 0.5633576324026763
Test R^2 Score 0.6688400346353676
Intercept -171121.8699917546
make -0.001719
body -0.093239
mileage 0.000714
engV 0.008353
engType -0.047572
registration 0.675316
year 259.949229
drive 0.372255
Ln_year 0.391152
year ** 2 -0.131649
year ** 3 0.000022
###Markdown
2.5 How do we interpret coefficients in Log-Linear Regression (differently than Ordinary Least Squares Regression)?One sentence can be sufficient Assuming all other features are held constant, for every unit change in a given feature, the average percentage change in price is given by that feature's coefficient. Decision Trees 3.1 Use scikit-learn to fit a decision tree regression model, using your training data.Use one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.You may use the log-transformed target or the original un-transformed target. You will not be evaluated on which you choose.
###Code
def run_decisiontree_model(X, y, max_depth=1):
# Split into test and train data
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20, random_state=42)
# Fit model using train data
model = DecisionTreeRegressor(max_depth=max_depth)
model.fit(X_train, y_train)
# Make predictions using test features
y_pred = model.predict(X_test)
# Compare predictions to test target
rmse = (np.sqrt(mean_squared_error(y_test, y_pred)))
r2 = r2_score(y_test, y_pred)
#print('Decision tree Test Root Mean Squared Error', rmse)
print('Decision Tree Test R^2 Score', r2)
###Output
_____no_output_____
###Markdown
3.2 Use the test data to get the $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
run_decisiontree_model(X, df.price, max_depth=7)
###Output
Decision Tree Test R^2 Score 0.8200506025111765
###Markdown
Regression Diagnostics 4.1 Use statsmodels to run a log-linear or log-polynomial linear regression with robust standard errors.
###Code
model = sm.OLS(df.Ln_price, sm.add_constant(X))
results = model.fit(cov_type="HC3")
print(results.summary())
###Output
/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py:2389: FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
return ptp(axis=axis, out=out, **kwargs)
###Markdown
4.2 Calculate the Variance Inflation Factor (VIF) of our X variables. Do we have multicollinearity problems?Yes, because 'year ** 3', 'year ** 2', 'year', 'Ln_year' have a VIF much greater than 10.
###Code
X = sm.add_constant(X)
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns).sort_values(ascending=False)
###Output
/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py:2389: FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
return ptp(axis=axis, out=out, **kwargs)
/usr/local/lib/python3.6/dist-packages/statsmodels/regression/linear_model.py:1543: RuntimeWarning: divide by zero encountered in double_scalars
return 1 - self.ssr/self.centered_tss
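###Markdown
The very large VIFs above come mainly from building polynomial and log terms out of raw calendar years, which are almost perfectly correlated with one another. One common remedy is to center `year` before squaring it; the sketch below is only an illustration of that idea, and the `year_c` / `year_c_sq` column names (and the choice to keep just a centered quadratic) are assumptions, not part of the original assignment.
###Code
# Center year before building the quadratic term, then recompute the VIFs.
X_centered = df[['make', 'body', 'mileage', 'engV', 'engType', 'registration', 'drive']].copy()
year_c = df['year'] - df['year'].mean()
X_centered['year_c'] = year_c
X_centered['year_c_sq'] = year_c ** 2
X_c = sm.add_constant(X_centered)
vif_c = [variance_inflation_factor(X_c.values, i) for i in range(len(X_c.columns))]
print(pd.Series(vif_c, X_c.columns).sort_values(ascending=False))
###Output
_____no_output_____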
###Markdown
_Lambda School Data Science_ Regression Sprint Challenge For this Sprint Challenge, you'll predict the price of used cars. The dataset is real-world. It was collected from advertisements of cars for sale in the Ukraine in 2016.The following import statements have been provided for you, and should be sufficient. But you may not need to use every import. And you are permitted to make additional imports.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
###Output
_____no_output_____
###Markdown
[The dataset](https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv) contains 8,495 rows and 9 variables:- make: manufacturer brand- price: seller’s price in advertisement (in USD)- body: car body type- mileage: as mentioned in advertisement (‘000 Km)- engV: rounded engine volume (‘000 cubic cm)- engType: type of fuel- registration: whether car registered in Ukraine or not- year: year of production- drive: drive typeRun this cell to read the data:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv')
print(df.shape)
df.sample(10)
###Output
(8495, 9)
###Markdown
Predictive Modeling with Linear Regression 1.1 Split the data into an X matrix and y vector (`price` is the target we want to predict).
###Code
X = df.drop(columns='price')
y = df['price']
###Output
_____no_output_____
###Markdown
1.2 Split the data into test and train sets, using `train_test_split`.You may use a train size of 80% and a test size of 20%.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X,y,test_size=.2, random_state=42)
print(X.shape, "\n")
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
###Output
(8495, 8)
(6796, 8)
(1699, 8)
(6796,)
(1699,)
###Markdown
1.3 Use scikit-learn to fit a multiple regression model, using your training data.Use `year` and one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.
###Code
mr_model = LinearRegression()
mr_model.fit(X_train,Y_train)
###Output
_____no_output_____
###Markdown
1.4 Report the Intercept and Coefficients for the fitted model.
###Code
print('Intercept: ', mr_model.intercept_)
print('Coefficients: ', mr_model.coef_)
###Output
Intercept: -2269355.0772314165
Coefficients: [ -35.16726588 -1770.98509064 -40.26859658 273.03540784
-1111.08031708 4535.06013378 1140.73124767 8292.04613874]
###Markdown
1.5 Use the test data to make predictions.
###Code
y_test_predict = mr_model.predict(X_test)
###Output
_____no_output_____
###Markdown
1.6 Use the test data to get both the Root Mean Square Error and $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
RMSE = np.sqrt(mean_squared_error(Y_test, y_test_predict))
print('RMSE is {}'.format(RMSE))
R2 = r2_score(Y_test, y_test_predict)
print('R^2 is {}'.format(R2))
###Output
RMSE is 21394.43524600266
R^2 is 0.29213322373743256
###Markdown
1.7 How should we interpret the coefficient corresponding to the `year` feature?One sentence can be sufficient The coefficient corresponding to the 'year' feature can be interpreted as how much the price can increase for each increase in the 'year' feature for otherwise similar cars. 1.8 How should we interpret the Root Mean Square Error?One sentence can be sufficient The root mean square error can be interpreted as a measure of how spread out the residuals are. In other words, it can tell us how concentrated around the best fit line the data is. 1.9 How should we interpret the $R^2$?One sentence can be sufficient In this case, the R^2 tells us that this model explains 29% of the variance in price. In general, this particular score represents the proportion of variance in the dependent variable (price) that is predictable from the independent variables. Log-Linear and Polynomial Regression 2.1 Engineer a new variable by taking the log of the price variable.
###Code
ln_y = np.log(df['price'])
###Output
_____no_output_____
###Markdown
2.2 Visualize scatterplots of the relationship between each feature versus the log of price, to look for non-linearly distributed features.You may use any plotting tools and techniques.
###Code
# Plot scatterplots
features = df.columns.drop('price')
for feature in features:
sns.scatterplot(x=feature, y=ln_y, data=df, alpha=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
2.3 Create polynomial feature(s)You will not be evaluated on which feature(s) you choose. But try to choose appropriate features.
###Code
df['year_squared'] = df['year']**2
###Output
_____no_output_____
###Markdown
2.4 Use the new log-transformed y variable and your x variables (including any new polynomial features) to fit a new linear regression model. Then report the: intercept, coefficients, RMSE, and $R^2$.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X,ln_y,test_size=.2, random_state=42)
lp_model = LinearRegression()
lp_model.fit(X_train,Y_train)
print('Intercept: ', lp_model.intercept_)
print('Coefficients: ', lp_model.coef_)
y_test_predict = lp_model.predict(X_test)
RMSE = np.sqrt(mean_squared_error(Y_test, y_test_predict))
print('RMSE is {}'.format(RMSE))
R2 = r2_score(Y_test, y_test_predict)
print('R^2 is {}'.format(R2))
###Output
Intercept: -183.02593585249053
Coefficients: [-1.52681064e-03 -9.35510337e-02 -2.96468789e-05 8.70319193e-03
-5.80216535e-02 7.31811098e-01 9.55180643e-02 3.89875953e-01]
RMSE is 0.5845598209790605
R^2 is 0.6434442979521522
###Markdown
2.5 How do we interpret coefficients in Log-Linear Regression (differently than Ordinary Least Squares Regression)?One sentence can be sufficient The coefficients are interpreted differently than OLS because they represent changes in percentages. Decision Trees 3.1 Use scikit-learn to fit a decision tree regression model, using your training data.Use one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.You may use the log-transformed target or the original un-transformed target. You will not be evaluated on which you choose.
###Code
tree = DecisionTreeRegressor()
tree.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
3.2 Use the test data to get the $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
print('R^2', tree.score(X_test, Y_test))
###Output
R^2 0.8712908485811194
###Markdown
Regression Diagnostics 4.1 Use statsmodels to run a log-linear or log-polynomial linear regression with robust standard errors.
###Code
sm_model = sm.OLS(ln_y, sm.add_constant(X))
results = sm_model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: price R-squared: 0.658
Model: OLS Adj. R-squared: 0.658
Method: Least Squares F-statistic: 2040.
Date: Fri, 03 May 2019 Prob (F-statistic): 0.00
Time: 16:36:54 Log-Likelihood: -7167.0
No. Observations: 8495 AIC: 1.435e+04
Df Residuals: 8486 BIC: 1.442e+04
Df Model: 8
Covariance Type: nonrobust
================================================================================
coef std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------
const -181.8341 2.144 -84.810 0.000 -186.037 -177.631
make -0.0016 0.000 -6.052 0.000 -0.002 -0.001
body -0.0959 0.004 -23.483 0.000 -0.104 -0.088
mileage -9.471e-05 7.8e-05 -1.214 0.225 -0.000 5.82e-05
engV 0.0092 0.001 8.048 0.000 0.007 0.011
engType -0.0581 0.005 -11.946 0.000 -0.068 -0.049
registration 0.7220 0.027 26.530 0.000 0.669 0.775
year 0.0949 0.001 89.151 0.000 0.093 0.097
drive 0.3908 0.009 44.603 0.000 0.374 0.408
==============================================================================
Omnibus: 429.442 Durbin-Watson: 1.918
Prob(Omnibus): 0.000 Jarque-Bera (JB): 1600.078
Skew: 0.070 Prob(JB): 0.00
Kurtosis: 5.122 Cond. No. 7.06e+05
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 7.06e+05. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
4.2 Calculate the Variance Inflation Factor (VIF) of our X variables. Do we have multicollinearity problems?One sentence can be sufficient
###Code
sm_X = sm.add_constant(X).values
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns)
###Output
/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py:2389: FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
return ptp(axis=axis, out=out, **kwargs)
###Markdown
_Lambda School Data Science_ Regression Sprint Challenge For this Sprint Challenge, you'll predict the price of used cars. The dataset is real-world. It was collected from advertisements of cars for sale in the Ukraine in 2016.The following import statements have been provided for you, and should be sufficient. But you may not need to use every import. And you are permitted to make additional imports.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
###Output
_____no_output_____
###Markdown
[The dataset](https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv) contains 8,495 rows and 9 variables:- make: manufacturer brand- price: seller’s price in advertisement (in USD)- body: car body type- mileage: as mentioned in advertisement (‘000 Km)- engV: rounded engine volume (‘000 cubic cm)- engType: type of fuel- registration: whether car registered in Ukraine or not- year: year of production- drive: drive typeRun this cell to read the data:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/car_regression.csv')
print(df.shape)
df.sample(10)
###Output
(8495, 9)
###Markdown
Predictive Modeling with Linear Regression 1.1 Split the data into an X matrix and y vector (`price` is the target we want to predict).
###Code
X = df[['make','body','mileage','engV','engType','registration','year','drive']]
y=df['price']
###Output
_____no_output_____
###Markdown
1.2 Split the data into test and train sets, using `train_test_split`.You may use a train size of 80% and a test size of 20%.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, test_size=0.20)
###Output
_____no_output_____
###Markdown
1.3 Use scikit-learn to fit a multiple regression model, using your training data.Use `year` and one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.
###Code
model = LinearRegression()
model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
1.4 Report the Intercept and Coefficients for the fitted model.
###Code
print('intercept:',model.intercept_)
pd.Series(model.coef_,X.columns)
###Output
intercept: -2273093.691240952
###Markdown
1.5 Use the test data to make predictions.
###Code
y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
1.6 Use the test data to get both the Root Mean Square Error and $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
print('RMSE:',np.sqrt(mean_squared_error(y_test,y_pred)))
print('\n R^2',r2_score(y_test,y_pred))
###Output
RMSE: 22269.494022621428
R^2 0.27344300790999754
###Markdown
1.7 How should we interpret the coefficient corresponding to the `year` feature? The coefficient for year can be interpreted as the increase in price per year. 1.8 How should we interpret the Root Mean Square Error? My predictions were off by about 19,640 US dollars on average. 1.9 How should we interpret the $R^2$? My model doesn't fit very well because it has a low score. Log-Linear and Polynomial Regression 2.1 Engineer a new variable by taking the log of the price variable.
###Code
df['log_price']=np.log(df['price'])
###Output
_____no_output_____
###Markdown
2.2 Visualize scatterplots of the relationship between each feature versus the log of price, to look for non-linearly distributed features.You may use any plotting tools and techniques.
###Code
import seaborn as sns
for column in X.columns:
sns.residplot(X[column], y, lowess=True, line_kws=dict(color='r'))
plt.show()
for column in X.columns:
df.plot(x=column,y='log_price',kind='scatter',alpha=.5)
for column in X.columns:
df.plot(x=column,y='price',kind='scatter',alpha=.5)
###Output
_____no_output_____
###Markdown
Year looks non-linear, and mileage does too. 2.3 Create polynomial feature(s)You will not be evaluated on which feature(s) you choose. But try to choose appropriate features.
###Code
df['year_squared'] = df['year']**2
df['mileage_squared'] = df['mileage']**2
###Output
_____no_output_____
###Markdown
2.4 Use the new log-transformed y variable and your x variables (including any new polynomial features) to fit a new linear regression model. Then report the: intercept, coefficients, RMSE, and $R^2$.
###Code
X = df[['make','body','mileage','engV','engType',
'registration','year','drive','mileage_squared','year_squared']]
y = df['log_price']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.50, test_size=0.50)
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
rmse = (np.sqrt(mean_squared_error(y_test, y_pred)))
r2 = r2_score(y_test, y_pred)
print('Root Mean Squared Error', rmse)
print('R^2 Score', r2)
print('Intercept', model.intercept_)
coefficients = pd.Series(model.coef_, X_train.columns)
print(coefficients.to_string())
###Output
Root Mean Squared Error 0.5586989507148826
R^2 Score 0.668060529243124
Intercept 6777.288639915637
make -0.001810
body -0.089799
mileage 0.000384
engV 0.006610
engType -0.048735
registration 0.670779
year -6.865995
drive 0.366451
mileage_squared 0.000001
year_squared 0.001741
###Markdown
2.5 How do we interpret coefficients in Log-Linear Regression (differently than Ordinary Least Squares Regression)?One sentence can be sufficient Instead of a set amount of change, LLR coefficients represent a percentage of change. Decision Trees 3.1 Use scikit-learn to fit a decision tree regression model, using your training data.Use one or more features of your choice. You will not be evaluated on which features you choose. You may choose to use all features.You may use the log-transformed target or the original un-transformed target. You will not be evaluated on which you choose.
###Code
tree = DecisionTreeRegressor()
tree.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
3.2 Use the test data to get the $R^2$ for the model. You will not be evaluated on how high or low your scores are.
###Code
print('Train R^2 score:', tree.score(X_train, y_train))
print('Test R^2 score:', tree.score(X_test, y_test))
###Output
Train R^2 score: 0.9998357175150873
Test R^2 score: 0.8453799205173202
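###Markdown
The near-perfect training score next to the much lower test score above is a classic sign of overfitting: an unconstrained tree can effectively memorize the training data. The sketch below is just an illustration of one way to probe this, refitting with a few arbitrary `max_depth` values (the depths and the `random_state` are assumptions, not tuned choices).
###Code
# Compare train/test R^2 for a few tree depths to see how the gap changes.
for depth in [3, 5, 8, None]:
    t = DecisionTreeRegressor(max_depth=depth, random_state=42)
    t.fit(X_train, y_train)
    print('max_depth =', depth,
          '| train R^2 =', round(t.score(X_train, y_train), 3),
          '| test R^2 =', round(t.score(X_test, y_test), 3))
###Output
_____no_output_____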
###Markdown
Regression Diagnostics 4.1 Use statsmodels to run a log-linear or log-polynomial linear regression with robust standard errors.
###Code
model = sm.OLS(y, X)
results = model.fit(cov_type='HC3')
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: log_price R-squared: 0.996
Model: OLS Adj. R-squared: 0.996
Method: Least Squares F-statistic: 2.793e+05
Date: Fri, 03 May 2019 Prob (F-statistic): 0.00
Time: 16:25:12 Log-Likelihood: -7135.7
No. Observations: 8495 AIC: 1.429e+04
Df Residuals: 8485 BIC: 1.436e+04
Df Model: 10
Covariance Type: HC3
===================================================================================
coef std err z P>|z| [0.025 0.975]
-----------------------------------------------------------------------------------
make -0.0016 0.000 -5.605 0.000 -0.002 -0.001
body -0.0933 0.004 -22.856 0.000 -0.101 -0.085
mileage -0.0009 0.000 -3.601 0.000 -0.001 -0.000
engV 0.0092 0.002 3.781 0.000 0.004 0.014
engType -0.0600 0.005 -12.525 0.000 -0.069 -0.051
registration 0.7098 0.020 36.272 0.000 0.671 0.748
year -0.0861 0.002 -50.189 0.000 -0.089 -0.083
drive 0.3874 0.010 38.889 0.000 0.368 0.407
mileage_squared 1.794e-06 5.61e-07 3.200 0.001 6.95e-07 2.89e-06
year_squared 4.506e-05 8.52e-07 52.896 0.000 4.34e-05 4.67e-05
==============================================================================
Omnibus: 397.837 Durbin-Watson: 1.918
Prob(Omnibus): 0.000 Jarque-Bera (JB): 1428.343
Skew: -0.004 Prob(JB): 6.91e-311
Kurtosis: 5.009 Cond. No. 1.80e+07
==============================================================================
Warnings:
[1] Standard Errors are heteroscedasticity robust (HC3)
[2] The condition number is large, 1.8e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
4.2 Calculate the Variance Inflation Factor (VIF) of our X variables. Do we have multicollinearity problems?Year and year_squared are highly collinear. Dropping one of the two reduces my r^2 by around 30%. :(
###Code
X =sm.add_constant(X)
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns)
X = df[['make','body','mileage','engV','engType',
'registration','year_squared','drive', 'mileage_squared']]
y = df['log_price']
X =sm.add_constant(X)
vif = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
pd.Series(vif, X.columns)
model = sm.OLS(y, X)
results = model.fit(cov_type='HC3')
print(results.summary())
###Output
_____no_output_____ |
ocean/OceanCaseStudies/Marine_heatwaves_S3_CMEMS.ipynb | ###Markdown
Using Copernicus data to investigate Marine Heatwaves Version: 3.0 Date: 14/07/2020 Author: Hayley Evers-King (EUMETSAT) and Ben Loveday (InnoFlair, Plymouth Marine Laboratory) Credit: This code was developed for EUMETSAT under contracts for the European Commission Copernicus programme. License: This code is offered as open source and free-to-use in the public domain, with no warranty, under the MIT license associated with this code repository. **What is this notebook for?**This notebook will download EUMETSAT Sentinel-3 SLSTR data for composite plotting, as well as CMEMS time series data of SST, from both reprocessed and NRT data stream. The notebook covers the application of some simple plotting techniques and application of basic analysis to investigate both historical and current potential for the occurrence of marine heat waves.**What specific tools does this notebook use?**Beyond general Python modules, this notebook imports some functions for managing the harmonised data access api (harmonised_data_access_api_tools.py) which can be found in the wekeo-hda folder on JupyterLab, and additional libraries for marine heatwave analysis provided openly and based on the work of Hobday et al..**What are marine heatwaves and how can Copernicus data be used to observe them?**Like heatwaves on land, marine heatwaves are extended periods of higher than normal temperatures. They have occurred in different areas around the world and can be caused by different oceanographic driving forces. You can find out all about them here. Marine heatwaves can be devastating for marine life, particularly those that can suffer from thermal stress, such as coral reefs. The variable drivers, and historical context for defining heatwaves regionally, mean that we need the ability to measure the range of ocean temperatures that occur all over the world at any given time, and also a long-term base-line understanding of what ‘normal’ temperatures look like.Sea surface temperature measurements from satellites can support the identification of marine heatwaves, both through the daily measurements they make, and their contributions to long-term data records. The Sentinel-3 satellites are the Copernicus programme's contribution to climate-scale monitoring of sea surface temperatures, with the Sea and Land Surface Temperature Radiometer (SLSTR) on Sentinel-3A now able to function as a reference sensor, when combining multiple sea surface temperature data sets, such as those available from the Copernicus Marine Service.In this notebook we will work through an example of how you can access near-real-time data to view current SST in a region that is often affected by marine heatwaves. We will then look at this area in a long term context using a reprocessed time series, to see how the current situation compares to historical marine heat wave episodes. Get WEkEO User credentialsIf you want to download the data to use this notebook, you will need WEkEO User credentials. If you do not have these, you can register here. *** Let's get started! Python is divided into a series of libraries, packages, and modules that each contain a series of methods for specific tasks. The box below imports everything we need to complete the tasks in this notebook including data access, manipulation, analysis and plotting.
###Code
# standard tools
import os, sys, json
from zipfile import ZipFile
import shutil
import sys
import datetime
import numpy as np
from IPython.core.display import HTML
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib import gridspec
import glob
import warnings
warnings.filterwarnings("ignore")
# HDA API tools
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.getcwd())),'wekeo-hda'))
import hda_api_functions as hapi # this is the Python version
# specific tools (which can be found here ../../tools/)
sys.path.append(os.path.join(os.getcwd(),'tools'))
sys.path.append(os.path.join(os.getcwd(),'tools','mhw_master'))
import image_tools as img
import SST_plotting_tools as sstp
import marineHeatWaves as mhw
###Output
_____no_output_____
###Markdown
WEkEO provides access to a huge number of datasets through its **'harmonised-data-access'** API. This allows us to query the full data catalogue and download data quickly and directly onto the Jupyter Lab. You can search for what data is available hereIn order to use the HDA-API we need to provide some authentication credentials, which comes in the form of an API key and API token. In this notebook we have provided functions so you can retrieve the API key and token you need directly. You can find out more about this process in the notebook on HDA access (wekeo_harmonized_data_access_api.ipynb) that can be found in the **wekeo-hda** folder on your Jupyterlab.We will also define a few other parameters including where to download the data to, and if we want the HDA-API functions to be verbose. **Lastly, we will tell the notebook where to find the query we will use to find the data.** These 'JSON' queries are what we use to ask WEkEO for data. They have a very specific form, but allow us quite fine grained control over what data to get. You can find the ones that we will use here: **JSON_templates/mhw/..**
###Code
# your WEkEO API username and password (needs to be in ' ')
user_name = 'hek17'
password = 'Cadbury17!'
# Generate an API key
api_key = hapi.generate_api_key(user_name, password)
display(HTML('Your API key is: <b>'+api_key+'</b>'))
# where the data should be downloaded to:
download_dir_path = os.path.join(os.getcwd(),'products')
# where we can find our data query form:
JSON_query_dir = os.path.join(os.getcwd(),'JSON_templates','mhw')
# HDA-API loud and noisy?
verbose = False
# make the output directory if required
if not os.path.exists(download_dir_path):
os.makedirs(download_dir_path)
###Output
_____no_output_____
###Markdown
Now we are ready to get our data.
###Code
# SLSTR L2 SST KEY
dataset_id = "EO:EUM:DAT:SENTINEL-3:SL_2_WST___"
download_data = True #Set this to False if you've already downloaded the data!
###Output
_____no_output_____
###Markdown
We use our dataset ID to load the correct JSON query file from ../JSON_templates/mhw/You can edit this query if you want to get different data, but be aware of asking for too much data - you could be here a while and might run out of space to use this data in the JupyterLab. The box below gets the correct query file.
###Code
# find query file
JSON_query_file = os.path.join(JSON_query_dir,dataset_id.replace(':','_')+".json")
if not os.path.exists(JSON_query_file):
print('Query file ' + JSON_query_file + ' does not exist')
else:
print('Found JSON query file for '+dataset_id)
###Output
_____no_output_____
###Markdown
Now we have a query, we need to launch it to WEkEO to get our data. The box below takes care of this through the following steps: 1. initialise our HDA-API 2. get an access token for our data 3. accepts the WEkEO terms and conditions 4. loads our JSON query into memory 5. launches our search 6. waits for our search results 7. gets our result list 8. downloads our dataThis is quite a complex process, so much of the functionality has been buried 'behind the scenes'. If you want more information, you can check out the Harmonised Data Access API tutorials. The code below will report some information as it runs. At the end, it should tell you that one product has been downloaded.
###Code
if download_data:
# set maximum results to find to make sure we capture JSON pagination
total_results = 1e6
HAPI_dict = hapi.init(dataset_id, api_key, download_dir_path)
HAPI_dict = hapi.get_access_token(HAPI_dict)
HAPI_dict = hapi.acceptTandC(HAPI_dict)
# load the query
with open(JSON_query_file, 'r') as f:
query = json.load(f)
# launch job
print('Launching job...')
HAPI_dict = hapi.get_job_id(HAPI_dict, query)
# check results
print('Getting results...')
print('------------------')
# cycle through JSON page output
found_results = 0
page_count = -1
while found_results < total_results:
page_count = page_count + 1
HAPI_dict = hapi.get_results_list(HAPI_dict, page=page_count, verbose=verbose)
total_results = HAPI_dict['results']['totItems']
found_results = found_results + HAPI_dict['results']['itemsInPage']
print('------------------')
print('Found order ids for {} of {} products'.format(found_results, total_results))
print('------------------')
# get order ids
print('Getting order ids...')
HAPI_dict = hapi.get_order_ids(HAPI_dict)
# download data
print('Downloading data...')
HAPI_dict = hapi.download_data(HAPI_dict, file_extension='.zip')
###Output
_____no_output_____
###Markdown
Sentinel data is usually distributed as a zip file, which contains the SAFE format data within. To use this, we must unzip the file. The code below handles this.
###Code
if download_data:
# unzip file
for filename in HAPI_dict['filenames']:
if os.path.splitext(filename)[-1] == '.zip':
print('Unzipping file')
try:
with ZipFile(filename, 'r') as zipObj:
# Extract all the contents of zip file in current directory
zipObj.extractall(os.path.dirname(filename))
# clear up the zip file
os.remove(filename)
except:
print("Failed to unzip....")
###Output
_____no_output_____
###Markdown
*** Plotting SLSTR dataWe plot our SLSTR scenes using a function that manages data ingestion, flagging, bias correction and makes some map embellishments (e.g. adds dotted lines to the scene edges, so we can tell where the boundaries are). We call this function in the boxes further down. We start by finding all the necessary files (which glob.glob takes care of). The files are added to a list which is then sent to the plotting function in the next cell.
###Code
# verbosity
verbose = False
# figure options
figure_font_size = 20
plot_extents = [-148, -120, 32, 47]
vmin = 8
vmax = 20
grid_factor = 3 #sub-sample to reduce plot resolution
# get the files
SLSTR_files = glob.glob(os.path.join(download_dir_path,'S3*WST*202106*','*.nc'))
###Output
_____no_output_____
###Markdown
And now we pass our list of files to the plotting routine. The plotting routine returns the handles of our axes, so that we can still make some changes once the main plot is done (e.g. add the annotations). Finally, it will save the figure in the same directory as this notebook.
###Code
# make the plot: we will call this as a function as it contains a 'for' loop to make the plot
fig, axis, colbar = sstp.make_SLSTR_composite_plot(SLSTR_files, plot_extents=plot_extents,\
fsz=figure_font_size, vmin=vmin, vmax=vmax, grid_factor=grid_factor)
# add some embellishments
plt.savefig('SLSTR_All_SST_California_20210617.png',bbox_inches='tight')
###Output
_____no_output_____
###Markdown
You can now find the image in the folder where your code is stored in your instance of the JupyterLab. The image is a composite of three images from the SLSTR sensors aboard the Sentinel-3A and B satellites which captured warm temperatures around the eastern Pacific in June 2021. This highlights the increased coverage that can be achieved in both time and space, using multiple sensors, but in order to understand if these temperatures could be an indicator of the beginning of a marine heatwave we need some longer term context... *** Looking at time series data of sea surface temperature To place the images above in context we'll need to look at a time series of data. For this we can access data from the Copernicus services, where multiple data sources (different satellites etc) are combined to produce data sets that cover longer time periods. In this case we are going to look at the OSTIA Sea Surface Temperature using both the NRT and reprocessed data streams, so we can look at data right up to the time period of the SLSTR images we looked at previously. Reprocessed and NRT data streams are produced separately and we must be careful when we interpret these data sets together. As time progresses, we understand better how satellite instruments perform, algorithms improve, and data is reprocessed to ensure good quality and consistency between sources. With NRT we do not have all the information we have in hindsight, particularly about instrument characterisation, but this data is vitally important for understanding events that are happening right now. So, for example, you would not use combined NRT and reprocessed data sets for long-term trend analysis, and you would want to consider NRT measurements with a higher degree of uncertainty that you might consider with reprocessed data. So whilst we can use the NRT data now to get an indication of whether this is event is looking like it might be significant, we would eventually want to use a longer term time series that have been reprocessed, to establish how unusual this is event is in a more climatic context. As before, we will construct and submit a query to the WEkEO Harmonised Data Access API to get this data. Here we have supplied the JSON files for the data sets of interest, but you could edit these files to look at your own time frames/regions of interest etc.
###Code
# makes a date array for the REP product
start_dates = []
end_dates = []
dataset_ids = []
for ii in range(2008,2019+1):
start_dates.append(str(ii)+"-01-01T00:00:00.000Z")
end_dates.append(str(ii)+"-12-31T00:00:00.000Z")
dataset_ids.append("EO:MO:DAT:SST_GLO_SST_L4_REP_OBSERVATIONS_010_011")
# add the NRT product
for ii in range(2020,2021+1):
start_dates.append(str(ii)+"-01-01T00:00:00.000Z")
end_dates.append(str(ii)+"-12-31T00:00:00.000Z")
dataset_ids.append("EO:MO:DAT:SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001")
# start loop over dates
if download_data:
# init HAPI
HAPI_dict = hapi.init(dataset_id, api_key, download_dir_path)
HAPI_dict = hapi.get_access_token(HAPI_dict)
HAPI_dict = hapi.acceptTandC(HAPI_dict)
for start_date, end_date, dataset_id in zip(start_dates, end_dates, dataset_ids):
# find query file
JSON_query_file = os.path.join(JSON_query_dir,dataset_id.replace(':','_')+".json")
if not os.path.exists(JSON_query_file):
print('Query file ' + JSON_query_file + ' does not exist')
else:
print('Found JSON query file for '+dataset_id)
print('Running for: ' + start_date + ' to ' + end_date)
date_tag = start_date.split('T')[0] + '_' + end_date.split('T')[0]
date_string = start_date.split('T')[0].replace('-','') \
+ '_' + end_date.split('T')[0].replace('-','')
# load the query
with open(JSON_query_file, 'r') as f:
query = f.read()
query = query.replace("%DATE_START%",start_date)
query = query.replace("%DATE_END%",end_date)
query = json.loads(query)
# launch job
print('Launching job...')
HAPI_dict = hapi.get_job_id(HAPI_dict, query)
# check results
print('Getting results...')
HAPI_dict = hapi.get_results_list(HAPI_dict)
HAPI_dict = hapi.get_order_ids(HAPI_dict)
# download data
print('Downloading data...')
if 'NRT' in dataset_id:
tag = 'XNRT'
else:
tag = 'REP'
HAPI_dict = hapi.download_data(HAPI_dict, \
user_filename='METOFFICE-GLO-SST-L4-{0}_{1}.nc'.format(tag, date_tag))
###Output
_____no_output_____
###Markdown
*** Plotting time series data to investigate occurrence and potential for marine heatwaves First we will set up some parameters, including the reference date for the product time stamps, the time window used for the spatial anomaly averaging, and the plotting font size.
###Code
# The data product time is measured in seconds since the following refernce date
Date_ref = datetime.datetime(1981,1,1)
# Select the times we want to use for our spatial anomaly plots [month, day]
month_day_start = [1,1]
month_day_end = [12,31]
# Plotting font size
fsz = 30
###Output
_____no_output_____
###Markdown
Then we will find our downloaded SST files and build a sorted list of them, ready to be combined into a single time series.
###Code
SST_files = []
my_files = glob.glob(os.path.join(download_dir_path,'METOFFICE-GLO-SST-L4-*'))
for SST_file in sorted(my_files):
SST_files.append(SST_file)
###Output
_____no_output_____
###Markdown
The next cell performs the bulk of the processing work for our plot. We start by loading in coordinates that we can use to subset the data (if required) and make our plots. Subsequently, we loop through the SST products and read in the data, convert Kelvin to Celsius where needed, and perform our averaging in time and space.
###Code
#load the co-ordinate variables we need for subsetting/plotting
ds = xr.open_dataset(SST_files[-1])
lat = ds.lat.data
lon = ds.lon.data
ds.close()
# initialise lists for output times series variables for MWH
all_times = []
all_SST = []
# initialise arrays for output SST fields
iter_SST = np.ones([len(SST_files), len(lat), len(lon)])*np.nan
# now we get the area-averaged data
count = -1
for SST_file in SST_files:
count = count + 1
# xarray does not read times consistently between RAN and NRT, so we load as integer
ds = xr.open_dataset(SST_file, decode_times=False)
this_SST = ds.analysed_sst.data
times = ds.time.data
ds.close()
my_times = []
for time in times:
my_times.append(Date_ref + datetime.timedelta(seconds=int(time)))
times = np.asarray(my_times)
t0 = datetime.datetime(times[0].year, month_day_start[0], month_day_start[1])
t1 = datetime.datetime(times[0].year, month_day_end[0], month_day_end[1])
tt = np.where((times >= t0) & (times <= t1))[0]
if np.nanmean(this_SST) > 100:
this_SST = this_SST - 273.15
time_subset_SST = np.nanmean(np.nanmean(this_SST[tt,:,:],axis=1),axis=1)
iter_SST[count,:,:] = np.squeeze(np.nanmean(time_subset_SST, axis=0))
all_times.append(my_times)
all_SST.append(np.nanmean(np.nanmean(this_SST, axis=1), axis=1))
###Output
_____no_output_____
###Markdown
A little bit of formatting is necessary for the plots we want to make...
###Code
# flatten the SST list
SST_time_series = [item for sublist in all_SST for item in sublist]
SST_time_series = np.asarray(SST_time_series)
# format the dates for the MWH toolkit
Dates_time_series = [item for sublist in all_times for item in sublist]
Dates_time_series_formatted = [datetime.date.toordinal(tt) for tt in Dates_time_series]
# make climatology of time subset region
clim_SST = np.nanmean(iter_SST, axis=0)
# calculate heat waves
mhws, clim = mhw.detect(np.asarray(Dates_time_series_formatted), SST_time_series)
# make matrix
stripe_array = np.ones([20,len(SST_files)])*np.nan
# now make the anomaly
for ii in range(len(SST_files)):
stripe_array[:, ii] = np.nanmean(np.squeeze(iter_SST[ii,:,:]) - clim_SST)
###Output
_____no_output_____
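###Markdown
Before plotting, it can be useful to glance at what the detection step found. The sketch below only uses fields of the `mhws` dictionary that the plotting cell further down also relies on (`time_start`, `time_end` and `intensity_max`); any other summary statistics the toolkit may provide are not assumed here.
###Code
# Quick summary of the detected marine heatwave events.
n_events = len(mhws['time_start'])
print('Number of marine heatwave events detected:', n_events)
strongest = np.argmax(mhws['intensity_max'])
print('Strongest event: maximum intensity {:.2f} degC (as reported by the toolkit)'.format(
    mhws['intensity_max'][strongest]))
print('It ran from', datetime.date.fromordinal(int(mhws['time_start'][strongest])),
      'to', datetime.date.fromordinal(int(mhws['time_end'][strongest])))
###Output
_____no_output_____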
###Markdown
The first plot we'll make shows ‘stripes’ of the average sea surface temperature anomaly for our region of interest. Recently, ‘climate stripes’ have been used by 'citizen scientists' all over the world to show long-term trends in regional temperatures. The plot below shows a stripes-style graphic derived using the SST time series we have extracted. High anomalies are apparent during 2014 and 2015, during the previous marine heatwave, and again in 2019 and 2020, when further marine heatwave conditions were reported.
###Code
fig = plt.figure(figsize=(30,10), dpi =300)
vmax = np.nanmax(abs(stripe_array))
date_ticks = []
for ii in range(len(SST_files)):
date_ticks.append(str(2008+ii))
plt.pcolormesh(stripe_array[:,:],vmin=vmax*-1,vmax=vmax,cmap=plt.cm.RdBu_r)
plt.xticks(np.arange(len(SST_files))+0.5,date_ticks, rotation='90', fontsize=fsz/1.25)
plt.xlim([0,len(SST_files)-1])
plt.yticks([],[], fontsize=fsz/1.25)
cbar = plt.colorbar()
cbar.set_label('SST anomaly [$^{o}$C]',fontsize=fsz/1.25)
plt.savefig('Stripes.png')
###Output
_____no_output_____
###Markdown
Finally, we will look a little more quantitatively at historical marine heatwaves in this region, and see how the situation in recent times compares. The plot below is generated using a toolkit very kindly provided as open source code by Hobday et al. (you can find out more information on the toolkit and the science behind it here and here).
###Code
# plot MWHs
ev = np.argmax(mhws['intensity_max']) # Find largest event
fig = plt.figure(figsize=(35,15), dpi = 300)
plt.rc('font', size=fsz)
# Find indices for all n-MHWs before and after event of interest and shade accordingly
n_before=10
n_after=10
for ev0 in np.arange(ev-n_before, ev+n_after, 1):
t1 = np.where(Dates_time_series_formatted==mhws['time_start'][ev0])[0][0]
t2 = np.where(Dates_time_series_formatted==mhws['time_end'][ev0])[0][0]
p1 = plt.fill_between(Dates_time_series[t1:t2+1], SST_time_series[t1:t2+1],\
clim['thresh'][t1:t2+1], color=(1,0.85,0.85))
# Find indices for MHW of interest and shade accordingly
t1 = np.where(Dates_time_series_formatted==mhws['time_start'][-1])[0][0]
t2 = np.where(Dates_time_series_formatted==mhws['time_end'][-1])[0][0]
p2 = plt.fill_between(Dates_time_series[t1:t2+1], SST_time_series[t1:t2+1],\
clim['thresh'][t1:t2+1], color='r')
# Plot SST, seasonal cycle, threshold, shade MHWs with main event in red
p3, = plt.plot(Dates_time_series, SST_time_series, 'k-', linewidth=2)
p4, = plt.plot(Dates_time_series, clim['thresh'], 'b--', linewidth=2)
p5, = plt.plot(Dates_time_series, clim['seas'], 'b-', linewidth=2)
xmin = datetime.datetime(2013,1,1)
xmax = datetime.datetime(2021,12,31)
plt.xlim(xmin,xmax)
plt.ylim(clim['seas'].min() - 0.3, clim['seas'].max() + mhws['intensity_max'][ev] + 0.2)
plt.ylabel('SST [$^\circ$C]')
plt.annotate('Plotting script credit:\nhttps://github.com/ecjoliver/marineHeatWaves',\
(0.005, 0.935), xycoords='axes fraction', color='0.5', fontsize=fsz/1.25)
leg = plt.legend([p3, p5, p4, p2, p1],\
['OSTIA SST','SST Seas. clim.','SST Seas. thresh.','Recent heatwave','Past heatwave'],\
bbox_to_anchor=(0.5, -0.3), ncol=5, loc="lower center")
leg.get_frame().set_linewidth(0.0)
plt.savefig('MHW.png')
###Output
_____no_output_____
###Markdown
Using Copernicus data to investigate Marine Heatwaves

Version: 3.0 Date: 14/07/2020 Authors: Hayley Evers-King (EUMETSAT) and Ben Loveday (InnoFlair, Plymouth Marine Laboratory) Credit: This code was developed for EUMETSAT under contracts for the European Commission Copernicus programme. License: This code is offered as open source and free-to-use in the public domain, with no warranty, under the MIT license associated with this code repository.

**What is this notebook for?**

This notebook will download EUMETSAT Sentinel-3 SLSTR data for composite plotting, as well as CMEMS time series data of SST, from both reprocessed and NRT data streams. The notebook covers the application of some simple plotting techniques and basic analysis to investigate both the historical and current potential for the occurrence of marine heat waves.

**What specific tools does this notebook use?**

Beyond general Python modules, this notebook imports some functions for managing the harmonised data access api (harmonised_data_access_api_tools.py), which can be found in the wekeo-hda folder on JupyterLab, and additional libraries for marine heatwave analysis provided openly and based on the work of Hobday et al.

**What are marine heatwaves and how can Copernicus data be used to observe them?**

Like heatwaves on land, marine heatwaves are extended periods of higher than normal temperatures. They have occurred in different areas around the world and can be caused by different oceanographic driving forces. You can find out all about them here. The variable drivers, and historical context for defining heatwaves regionally, mean that we need the ability to measure the range of ocean temperatures that occur all over the world at any given time, and also a long-term, base-line understanding of what ‘normal’ temperatures look like.

Sea surface temperature measurements from satellites can support the identification of marine heatwaves, both through the daily measurements they make, and their contributions to long-term data records. The Sentinel-3 satellites are the Copernicus programme's contribution to climate-scale monitoring of sea surface temperatures, with the Sea and Land Surface Temperature Radiometer (SLSTR) on Sentinel-3A now able to function as a reference sensor when combining multiple sea surface temperature data sets, such as those available from the Copernicus Marine Service.

Get WEkEO User credentials

If you want to download the data to use this notebook, you will need WEkEO User credentials. If you do not have these, you can register here.

*** Let's get started!

Python is divided into a series of modules that each contain a series of methods for specific tasks. The box below imports all of the modules we need to complete our plotting task.
###Code
# standard tools
import os, sys, json
from zipfile import ZipFile
import shutil
import sys
import datetime
import numpy as np
from IPython.core.display import HTML
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib import gridspec
import glob
import warnings
warnings.filterwarnings("ignore")
# HDA API tools
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.getcwd())),'wekeo-hda'))
import hda_api_functions as hapi # this is the PYTHON version
# specific tools (which can be found here ../../tools/)
sys.path.append(os.path.join(os.getcwd(),'tools'))
sys.path.append(os.path.join(os.getcwd(),'tools','mhw_master'))
import image_tools as img
import SST_plotting_tools as sstp
import marineHeatWaves as mhw
###Output
_____no_output_____
###Markdown
WEkEO provides access to a huge number of datasets through its **'harmonised-data-access'** API. This allows us to query the full data catalogue and download data quickly and directly onto the Jupyter Lab. You can search for what data is available here. In order to use the HDA-API we need to provide some authentication credentials, which come in the form of an API key and API token. In this notebook we have provided functions so you can retrieve the API key and token you need directly. You can find out more about this process in the notebook on HDA access (wekeo_harmonized_data_access_api.ipynb) that can be found in the **wekeo-hda** folder on your JupyterLab. We will also define a few other parameters, including where to download the data to, and whether we want the HDA-API functions to be verbose. **Lastly, we will tell the notebook where to find the query we will use to find the data.** These 'JSON' queries are what we use to ask WEkEO for data. They have a very specific form, but allow us quite fine-grained control over what data to get. You can find the ones that we will use here: **JSON_templates/mhw/..**
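As a purely illustrative sketch (the field names below are an assumption based on the general shape of WEkEO HDA queries, not copied from the template files), such a query boils down to a small JSON document along these lines:
###Code
# Hypothetical example of the kind of JSON query stored in JSON_templates/mhw/ -
# the real templates in that folder are the ones actually used later in this notebook.
# %DATE_START% / %DATE_END% placeholders are substituted with real dates before the query is sent.
example_query = {
    "datasetId": "EO:EUM:DAT:SENTINEL-3:SL_2_WST___",
    "dateRangeSelectValues": [
        {"name": "position", "start": "%DATE_START%", "end": "%DATE_END%"}
    ],
    "boundingBoxValues": [
        {"name": "bbox", "bbox": [-160, 10, -116, 45]}  # illustrative region of interest
    ],
}
print(json.dumps(example_query, indent=2))
###Output
_____no_output_____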
###Code
# your WEkEO API username and password (needs to be in ' ')
user_name = 'USERNAME'
password = 'PASSWORD'
# Generate an API key
api_key = hapi.generate_api_key(user_name, password)
display(HTML('Your API key is: <b>'+api_key+'</b>'))
# where the data should be downloaded to:
download_dir_path = os.path.join(os.getcwd(),'products')
# where we can find our data query form:
JSON_query_dir = os.path.join(os.getcwd(),'JSON_templates','mhw')
# HDA-API loud and noisy?
verbose = False
# make the output directory if required
if not os.path.exists(download_dir_path):
os.makedirs(download_dir_path)
###Output
_____no_output_____
###Markdown
Now we are ready to get our data.
###Code
# SLSTR L2 SST KEY
dataset_id = "EO:EUM:DAT:SENTINEL-3:SL_2_WST___"
download_data = True #Set this to False if you've already downloaded the data!
###Output
_____no_output_____
###Markdown
We use our dataset ID to load the correct JSON query file from ../JSON_templates/mhw/. You can edit this query if you want to get different data, but be wary of asking for too much data - you could be here a while and might run out of space to use this data in the JupyterLab. The box below gets the correct query file.
###Code
# find query file
JSON_query_file = os.path.join(JSON_query_dir,dataset_id.replace(':','_')+".json")
if not os.path.exists(JSON_query_file):
print('Query file ' + JSON_query_file + ' does not exist')
else:
print('Found JSON query file for '+dataset_id)
###Output
_____no_output_____
###Markdown
Now that we have a query, we need to launch it to WEkEO to get our data. The box below takes care of this through the following steps:
1. initialise our HDA-API
2. get an access token for our data
3. accept the WEkEO terms and conditions
4. load our JSON query into memory
5. launch our search
6. wait for our search results
7. get our result list
8. download our data

This is quite a complex process, so much of the functionality has been buried 'behind the scenes'. If you want more information, you can check out the Harmonised Data Access API tutorials. The code below will report some information as it runs. At the end, it should tell you that one product has been downloaded.
###Code
if download_data:
HAPI_dict = hapi.init(dataset_id, api_key, download_dir_path)
HAPI_dict = hapi.get_access_token(HAPI_dict)
HAPI_dict = hapi.acceptTandC(HAPI_dict)
# load the query
with open(JSON_query_file, 'r') as f:
query = json.load(f)
# launch job
print('Launching job...')
HAPI_dict = hapi.get_job_id(HAPI_dict, query)
# check results
print('Getting results...')
HAPI_dict = hapi.get_results_list(HAPI_dict)
HAPI_dict = hapi.get_order_ids(HAPI_dict)
# download data
print('Downloading data...')
HAPI_dict = hapi.download_data(HAPI_dict, file_extension='.zip')
###Output
_____no_output_____
###Markdown
Sentinel data is usually distributed as a zip file, which contains the SAFE format data within. To use this, we must unzip the file. The box below handles this.
###Code
if download_data:
# unzip file
for filename in HAPI_dict['filenames']:
if os.path.splitext(filename)[-1] == '.zip':
print('Unzipping file')
try:
with ZipFile(filename, 'r') as zipObj:
# Extract all the contents of zip file in current directory
zipObj.extractall(os.path.dirname(filename))
# clear up the zip file
os.remove(filename)
except:
print("Failed to unzip....")
###Output
_____no_output_____
###Markdown
*** Plotting SLSTR data

We plot our SLSTR scenes using a function that manages data ingestion, flagging, bias correction and makes some map embellishments (e.g. adds dotted lines to the scene edges, so we can tell where the boundaries are). We call this function in the boxes further down. We start by finding all the necessary files (which glob.glob takes care of). The files are added to a list which is then passed to the plotting function.
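To make what happens inside a little more transparent, below is a highly simplified, hypothetical sketch of the per-file steps such a routine typically performs (the variable names assume the standard GHRSST L2P conventions used by SLSTR L2 WST products; the real function in tools/SST_plotting_tools.py is considerably more involved):
###Code
# Illustrative sketch only - this is not the function used below.
def sketch_read_slstr_sst(nc_file, min_quality=4):
    # read one SLSTR L2 WST granule, mask poor-quality pixels and apply the SSES bias correction
    with xr.open_dataset(nc_file) as ds:
        sst = ds.sea_surface_temperature.squeeze() - 273.15          # Kelvin -> Celsius
        sst = sst - ds.sses_bias.squeeze()                           # single-sensor error bias correction
        sst = sst.where(ds.quality_level.squeeze() >= min_quality)   # keep only good-quality pixels
        return ds.lon.load(), ds.lat.load(), sst.load()
###Output
_____no_output_____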
###Code
# verbosity
verbose = False
# figure options
figure_font_size = 20
plot_extents = [-160, -116, 10, 45]
vmin = 10
vmax = 28
grid_factor = 10 #sub-sample to reduce plot resolution
# get the files
SLSTR_files = glob.glob(os.path.join(download_dir_path,'S3*WST*201909*','*.nc'))
###Output
_____no_output_____
###Markdown
And now we pass our list of files to the plotting routine. The plotting routine returns the handles of our axes, so that we can still make some changes once the main plot is done (e.g. add the annotations). Finally, it will save the figure in the same directory as this notebook.
###Code
# make the plot: we will call this as a function as it contains a 'for' loop to make the plot
fig, axis, colbar = sstp.make_SLSTR_composite_plot(SLSTR_files, plot_extents=plot_extents,\
fsz=figure_font_size, vmin=vmin, vmax=vmax, grid_factor=grid_factor)
# add some embellishments
plt.sca(axis)
label='Sentinel-3A SLSTR SST\n07/09/2019 (22:27 local time)'
txt = plt.annotate(label, xy=(0.66, 1.01), xycoords='axes fraction',\
size=figure_font_size/1.25,
color='k', zorder=100, annotation_clip=False)
label='Sentinel-3A SLSTR SST\n08/09/2019 (00:08 local time)'
txt = plt.annotate(label, xy=(0.005, 1.01), xycoords='axes fraction',\
size=figure_font_size/1.25,
color='k', zorder=100, annotation_clip=False)
label='Sentinel-3B SLSTR SST\n08/09/2019 (23:28 local time)'
txt = plt.annotate(label, xy=(0.33, 1.01), xycoords='axes fraction',\
size=figure_font_size/1.25,
color='b', zorder=100, annotation_clip=False)
plt.savefig('SLSTR_All_SST_California_20190909.png',bbox_inches='tight')
###Output
_____no_output_____
###Markdown
You can now find the image in the folder where your code is stored in your instance of the JupyterLab. The image is a composite of three images from the SLSTR sensors aboard the Sentinel-3A and B satellites, which captured warm temperatures around the eastern Pacific in September 2019. This highlights the increased coverage that can be achieved in both time and space using multiple sensors, but in order to understand if these temperatures could be an indicator of the beginning of a marine heatwave we need some longer term context...

*** Looking at time series data of sea surface temperature

To place the images above in context we'll need to look at a time series of data. For this we can access data from the Copernicus services, where multiple data sources (different satellites etc.) are combined to produce data sets that cover longer time periods. In this case we are going to look at the OSTIA Sea Surface Temperature (NRT) product, so we can use data right up to the time period of the SLSTR images we looked at previously. Reprocessed and NRT data streams are produced separately and we must be careful when we interpret these data sets together. As time progresses, we understand better how satellite instruments perform, algorithms improve, and data is reprocessed to ensure good quality and consistency between sources. With NRT we do not have all the information we have in hindsight, particularly about instrument characterisation, but this data is vitally important for understanding events that are happening right now. So, for example, you would not use combined NRT and reprocessed data sets for long-term trend analysis, and you would want to treat NRT measurements with a higher degree of uncertainty than you would reprocessed data. So whilst we can use the NRT data now to get an indication of whether this event is looking like it might be significant, we would eventually want to use a longer term time series that has been reprocessed, to establish how unusual this event is in a more climatic context.

As before, we will construct and submit a query to the WEkEO Harmonised Data Access API to get this data. Here we have supplied the JSON files for the data sets of interest, but you could edit these files to look at your own time frames/regions of interest etc.
###Code
# find query file for OSTIA NRT and initialise dictionary
dataset_id = "EO:MO:DAT:SST_GLO_SST_L4_NRT_OBSERVATIONS_010_001"
# makes a date array...NfH
start_dates = []
end_dates = []
for ii in range(2008,2019+1):
start_dates.append(str(ii)+"-01-01T00:00:00.000Z")
end_dates.append(str(ii)+"-12-31T00:00:00.000Z")
# find query file
JSON_query_file = os.path.join(JSON_query_dir,dataset_id.replace(':','_')+".json")
if not os.path.exists(JSON_query_file):
print('Query file ' + JSON_query_file + ' does not exist')
else:
print('Found JSON query file for '+dataset_id)
# start loop over dates
if download_data:
# init HAPI
HAPI_dict = hapi.init(dataset_id, api_key, download_dir_path)
HAPI_dict = hapi.get_access_token(HAPI_dict)
HAPI_dict = hapi.acceptTandC(HAPI_dict)
for start_date, end_date in zip(start_dates, end_dates):
print('Running for: ' + start_date + ' to ' + end_date)
date_tag = start_date.split('T')[0] + '_' + end_date.split('T')[0]
date_string = start_date.split('T')[0].replace('-','') \
+ '_' + end_date.split('T')[0].replace('-','')
# load the query
with open(JSON_query_file, 'r') as f:
query = f.read()
query = query.replace("%DATE_START%",start_date)
query = query.replace("%DATE_END%",end_date)
query = json.loads(query)
# launch job
print('Launching job...')
HAPI_dict = hapi.get_job_id(HAPI_dict, query)
# check results
print('Getting results...')
HAPI_dict = hapi.get_results_list(HAPI_dict)
HAPI_dict = hapi.get_order_ids(HAPI_dict)
# download data
print('Downloading data...')
HAPI_dict = hapi.download_data(HAPI_dict, \
user_filename='METOFFICE-GLO-SST-L4-NRT_' + date_tag + '.nc')
###Output
_____no_output_____
###Markdown
*** Plotting time series data to investigate occurrence and potential for marine heatwaves

First we will set up some parameters, including the time window and spatial area we want to work over when plotting.
###Code
# subset image: cut a relevant section out of an image for area averaging. If false, whole area will be used.
# THERE IS NO AREA WEIGHTING IN THE AVERAGING HERE!
# Subset_extents [lon1,lon2,lat1,lat2] describes the section we cut
subset_image = False
subset_extents = [-160.0, -150.0, 15.0, 25.0]
# The data product time is measured in seconds since the following reference date
Date_ref = datetime.datetime(1981,1,1)
# Select the times we want to use for our spatial anomaly plots [month, day]
month_day_start = [8,1]
month_day_end = [9,24]
# Plotting font size
fsz = 30
###Output
_____no_output_____
###Markdown
Then we will find our datasets and concatenate them.
###Code
SST_files = []
my_files = glob.glob(os.path.join(download_dir_path,'METOFFICE-GLO-SST-L4-NRT*'))
for SST_file in sorted(my_files):
SST_files.append(SST_file)
###Output
_____no_output_____
###Markdown
The next cell performs the bulk of the processing work for our plot. We start by loading the coordinates that we can use to subset the data (if required) and make our plots. We then loop through the SST products, read in the data, convert from Kelvin to Celsius where necessary, and perform our averaging in both time and space.
###Code
#load the co-ordinate variables we need for subsetting/plotting
ds = xr.open_dataset(SST_files[-1])
lat = ds.lat.data
lon = ds.lon.data
ds.close()
# subset coords if required, getting indices to subset out output SST products
if subset_image:
ii = np.where((lon >= subset_extents[0]) & (lon <= subset_extents[1]))[0]
jj = np.where((lat >= subset_extents[2]) & (lat <= subset_extents[3]))[0]
lon = lon[ii]
lat = lat[jj]
# initialise lists for output time series variables for MHW detection
all_times = []
all_SST = []
# initialise arrays for output SST fields
iter_SST = np.ones([len(SST_files), len(lat), len(lon)])*np.nan
# now we get the area-averaged data
count = -1
for SST_file in SST_files:
print(SST_file)
count = count + 1
    # xarray does not read times consistently between RAN and NRT, so we load as integer
ds = xr.open_dataset(SST_file, decode_times=False)
this_SST = ds.analysed_sst.data
times = ds.time.data
ds.close()
my_times = []
for time in times:
my_times.append(Date_ref + datetime.timedelta(seconds=int(time)))
times = np.asarray(my_times)
t0 = datetime.datetime(times[0].year, month_day_start[0], month_day_start[1])
t1 = datetime.datetime(times[0].year, month_day_end[0], month_day_end[1])
tt = np.where((times >= t0) & (times <= t1))[0]
if np.nanmean(this_SST) > 100:
this_SST = this_SST - 273.15
if subset_image:
this_SST = this_SST[:,jj[0]:jj[-1],ii[0]:ii[-1]]
time_subset_SST = this_SST[tt,:,:]
iter_SST[count,:,:] = np.squeeze(np.nanmean(time_subset_SST, axis=0))
all_times.append(my_times)
all_SST.append(np.nanmean(np.nanmean(this_SST, axis=1), axis=1))
###Output
_____no_output_____
###Markdown
A little bit of formatting is necessary for the plots we want to make...
###Code
# flatten the SST list
SST_time_series = [item for sublist in all_SST for item in sublist]
SST_time_series = np.asarray(SST_time_series)
# format the dates for the MHW toolkit
Dates_time_series = [item for sublist in all_times for item in sublist]
Dates_time_series_formatted = [datetime.date.toordinal(tt) for tt in Dates_time_series]
# make climatology of time subset region
clim_SST = np.nanmean(iter_SST, axis=0)
# calculate heat waves
mhws, clim = mhw.detect(np.asarray(Dates_time_series_formatted), SST_time_series)
# make matrix
stripe_array = np.ones([20,len(SST_files)])*np.nan
# now make the anomaly
for ii in range(len(SST_files)):
stripe_array[:, ii] = np.nanmean(np.squeeze(iter_SST[ii,:,:]) - clim_SST)
###Output
_____no_output_____
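###Markdown
Before plotting, it is worth taking a quick look at what the detection step actually found. The `mhws` dictionary returned by `mhw.detect` contains the number of events and per-event statistics such as start/end dates, duration and maximum intensity (a small inspection sketch using key names from the marineHeatWaves toolkit):
###Code
# quick summary of the detected marine heatwave events
print("Number of events detected:", mhws['n_events'])
for start, end, dur, imax in zip(mhws['date_start'], mhws['date_end'],
                                 mhws['duration'], mhws['intensity_max']):
    print(f"{start} to {end}: {dur} days, maximum intensity {imax:.2f} degC")
###Output
_____no_output_____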
###Markdown
The first plot we'll make shows ‘stripes’ of the average sea surface temperature anomaly for the region around Hawaii. Recently, ‘climate stripes’ have been used by citizen scientists all over the world to show long-term trends in regional temperatures. The plot below shows a stripes-style graphic for the eastern Pacific region around Hawaii. High anomalies are apparent in 2014 and in 2015, during the previous marine heatwave, and again in 2019, when they were associated with reports of renewed marine heatwave conditions.
###Code
fig = plt.figure(figsize=(30,10), dpi =300)
vmax = np.nanmax(abs(stripe_array))
date_ticks = []
for ii in range(len(SST_files)):
date_ticks.append(str(2008+ii))
plt.pcolormesh(stripe_array,vmin=vmax*-1,vmax=vmax,cmap=plt.cm.RdBu_r)
plt.xticks(np.arange(len(SST_files))+0.5,date_ticks, rotation='90')
plt.yticks([],[])
cbar = plt.colorbar()
cbar.set_label('SST anomaly [$^{o}$C] (1$^{st}$ Aug - 24$^{th}$ Sep)',fontsize=fsz/1.25)
plt.savefig('Stripes.png')
###Output
_____no_output_____
###Markdown
Finally, we will look a little more quantitatively at historical marine heatwaves in this region, and see how the situation in 2019 compared (with the previously mentioned assumptions about the accuracy of NRT data for longer term analysis). The plot below is generated using a toolkit very kindly provided as open source code by Hobday et al. (you can find out more information on the toolkit and the science behind it here and here).
###Code
# plot MWHs
ev = np.argmax(mhws['intensity_max']) # Find largest event
fig = plt.figure(figsize=(35,15), dpi = 300)
plt.rc('font', size=fsz)
# Find indices for all n-MHWs before and after event of interest and shade accordingly
n_before=3
n_after=1
for ev0 in np.arange(ev-n_before, ev+n_after, 1):
t1 = np.where(Dates_time_series_formatted==mhws['time_start'][ev0])[0][0]
t2 = np.where(Dates_time_series_formatted==mhws['time_end'][ev0])[0][0]
p1 = plt.fill_between(Dates_time_series[t1:t2+1], SST_time_series[t1:t2+1],\
clim['thresh'][t1:t2+1], color=(1,0.85,0.85))
# Find indices for MHW of interest and shade accordingly
t1 = np.where(Dates_time_series_formatted==mhws['time_start'][-1])[0][0]
t2 = np.where(Dates_time_series_formatted==mhws['time_end'][-1])[0][0]
p2 = plt.fill_between(Dates_time_series[t1:t2+1], SST_time_series[t1:t2+1],\
clim['thresh'][t1:t2+1], color='r')
# Plot SST, seasonal cycle, threshold, shade MHWs with main event in red
p3, = plt.plot(Dates_time_series, SST_time_series, 'k-', linewidth=2)
p4, = plt.plot(Dates_time_series, clim['thresh'], 'b--', linewidth=2)
p5, = plt.plot(Dates_time_series, clim['seas'], 'b-', linewidth=2)
xmin = datetime.datetime(2014,1,1).toordinal() - datetime.datetime(1,1,1).toordinal()
xmax = datetime.datetime(2019,12,31).toordinal() - datetime.datetime(1,1,1).toordinal()
plt.xlim(xmin,xmax)
plt.ylim(clim['seas'].min() - 0.3, clim['seas'].max() + mhws['intensity_max'][ev] + 0.2)
plt.ylabel('SST [$^\circ$C]')
plt.annotate('Plotting script credit:\nhttps://github.com/ecjoliver/marineHeatWaves',\
(0.005, 0.935), xycoords='axes fraction', color='0.5', fontsize=fsz/1.25)
leg = plt.legend([p3, p5, p4, p2, p1],\
['OSTIA NRT SST','SST Seas. clim.','SST Seas. thresh.','Current heatwave','Past heatwave'],\
bbox_to_anchor=(1.0, -0.05), ncol=5)
leg.get_frame().set_linewidth(0.0)
plt.savefig('MHW.png')
###Output
_____no_output_____
_notebooks/2021-05-21-Abalone_Regression_SageMaker.ipynb | ###Markdown
"SageMaker Regression on Abalone Data"> Estimation of the age of an abalone using readily available measurements- toc: true- branch: master- badges: false- comments: true- hide: false- search_exclude: true- metadata_key1: metadata_value1- metadata_key2: metadata_value2- image: images/Abalone_Regression_SageMaker.jpg- categories: [Fishing-Industry, Regression, AWS-Sagemaker,Linear-Learner]- show_tags: true 1. IntroductionWhat is Abalone? It is a large marine gastropod mollusk that lives in coastal saltwater and is a member of the Haliotidae family. Abalone is often found around the waters of South Africa, Australia, New Zealand, Japan, and the west coast of North America. The abalone shell is flat and spiral-shaped with several small holes around the edges. It has a single shell on top with a large foot to cling to rocks and lives on algae. Sizes range from 4 to 10 inches. The interior of the shell has an iridescent mother of pearl appearance (Figure 1).As a highly prized culinary delicacy (Figure 2), it has a rich, flavorful taste that is sweet buttery, and salty. Abalone is often sold live in the shell, but also frozen, or canned. It is among the world's most expensive seafood. For preparation it is often cut into thick steaks and pan-fried. It can also be eaten raw. 2. Data UnderstandingThere is more information on the Abalone Dataset available at [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). The dataset has 9 features:* Rings (number of)* sex (M, F, Infant)* Length (Longest shell measurement in mm)* Diameter (in mm)* Height (with meat in shell, in mm)* Whole Weight (whole abalone, in grams)* Shucked Weight (weight of meat, in grams)* Viscera Weight (gut weight after bleeding, in grams)* Shell Weight (after being dried, in grams)The number of rings indicates the age of the abalone. The age of abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings through a microscope. Not only is this a boring and time-consuming task but it is also relatively expensive in terms of waste. The remaining measurements, on the other hand, are readily achievable with the correct tools, and with much less effort. The purpose of this model is to estimate the abalone age, specifically the number of rings, based on the other features. 2.0 Setup
###Code
import urllib.request
import pandas as pd
import seaborn as sns
import random
# from IPython.core.debugger import set_trace
import boto3
import sagemaker
from sagemaker.image_uris import retrieve
from time import gmtime, strftime
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
# import json
# from itertools import islice
# import math
# import struct
!pip install smdebug
from smdebug.trials import create_trial
import matplotlib.pyplot as plt
import re
# hide
region = boto3.Session().region_name; print('region:', region)
role = sagemaker.get_execution_role(); print('role:', role)
s3 = boto3.resource('s3')
# bucket_str = "learnableloopai-blog"
# bucket = s3.Bucket(bucket_str)
bucket = "learnableloopai-blog"
prefix = "abalone"
###Output
region: us-west-2
role: arn:aws:iam::776779151861:role/service-role/AmazonSageMaker-ExecutionRole-20210506T090965
###Markdown
2.1 Download

The [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) is available in the libsvm format. Next, we will download it.
###Code
%%time
# Load the dataset
SOURCE_DATA = "abalone_libsvm.txt"
urllib.request.urlretrieve(
"https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", SOURCE_DATA
)
# hide
!pwd
# hide
!ls -altrh
!head -10 ./{SOURCE_DATA}
###Output
15 1:1 2:0.455 3:0.365 4:0.095 5:0.514 6:0.2245 7:0.101 8:0.15
7 1:1 2:0.35 3:0.265 4:0.09 5:0.2255 6:0.0995 7:0.0485 8:0.07
9 1:2 2:0.53 3:0.42 4:0.135 5:0.677 6:0.2565 7:0.1415 8:0.21
10 1:1 2:0.44 3:0.365 4:0.125 5:0.516 6:0.2155 7:0.114 8:0.155
7 1:3 2:0.33 3:0.255 4:0.08 5:0.205 6:0.0895 7:0.0395 8:0.055
8 1:3 2:0.425 3:0.3 4:0.095 5:0.3515 6:0.141 7:0.0775 8:0.12
20 1:2 2:0.53 3:0.415 4:0.15 5:0.7775 6:0.237 7:0.1415 8:0.33
16 1:2 2:0.545 3:0.425 4:0.125 5:0.768 6:0.294 7:0.1495 8:0.26
9 1:1 2:0.475 3:0.37 4:0.125 5:0.5095 6:0.2165 7:0.1125 8:0.165
19 1:2 2:0.55 3:0.44 4:0.15 5:0.8945 6:0.3145 7:0.151 8:0.32
###Markdown
2.2 Read data into dataframe
###Code
df = pd.read_csv(
SOURCE_DATA,
sep=" ",
encoding="latin1",
names=[
"age",
"sex",
"Length",
"Diameter",
"Height",
"Whole.weight",
"Shucked.weight",
"Viscera.weight",
"Shell.weight",
],
); df
###Output
_____no_output_____
###Markdown
2.3 Convert from libsvm to csv format

The libsvm format is not well suited to exploring the data with pandas. Next we will convert the data to csv format:
###Code
# Extracting the features values from the libvsm format
features = [
"sex",
"Length",
"Diameter",
"Height",
"Whole.weight",
"Shucked.weight",
"Viscera.weight",
"Shell.weight",
]
# strip the libsvm "index:" prefix from every feature column, keeping only the value
for f in features:
    df[f] = df[f].str.split(":", n=1, expand=True)[1]
df
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4177 entries, 0 to 4176
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 4177 non-null int64
1 sex 4177 non-null object
2 Length 4177 non-null object
3 Diameter 4177 non-null object
4 Height 4177 non-null object
5 Whole.weight 4177 non-null object
6 Shucked.weight 4177 non-null object
7 Viscera.weight 4177 non-null object
8 Shell.weight 4177 non-null object
dtypes: int64(1), object(8)
memory usage: 293.8+ KB
###Markdown
To understand the data better, we need to convert all the string types to numeric types.
###Code
df = df.astype({
'age':'int32',
'sex':'int32',
'Length':'float32',
'Diameter':'float32',
'Height':'float32',
'Whole.weight':'float32',
'Shucked.weight':'float32',
'Viscera.weight':'float32',
'Shell.weight':'float32',
})
df.info()
df.isnull().values.any()
df.isnull().sum().sum()
sns.set(style="ticks", color_codes=True)
# g = sns.pairplot(df)
g = sns.pairplot(df, diag_kind='kde')
###Output
_____no_output_____
###Markdown
The data is now clean with no missing values. We will write this clean data to a file:
###Code
CLEAN_DATA = "abalone_clean.txt"
# df = df.sample(n=10, random_state=1)
df.to_csv(CLEAN_DATA, sep=',', header=None, index=False)
# hide
!ls -altrh
###Output
total 2.9M
drwx------ 2 root root 16K May 6 16:10 lost+found
drwxr-xr-x 2 ec2-user ec2-user 4.0K May 6 16:10 .sparkmagic
-rw-rw-r-- 1 ec2-user ec2-user 458K May 19 15:14 hs_err_pid19109.log
-rw-rw-r-- 1 ec2-user ec2-user 449K May 19 19:00 PredictContributionEffort_1.ipynb
-rw-rw-rw- 1 ec2-user ec2-user 59K May 20 19:34 PredictContributionEffort_2__pyspark_mnist_xgboost.ipynb
drwxrwxr-x 2 ec2-user ec2-user 4.0K May 21 15:30 .ipynb_checkpoints
-rw-rw-r-- 1 ec2-user ec2-user 29K May 21 18:11 abalone_valid.csv
-rw-rw-r-- 1 ec2-user ec2-user 132K May 21 18:11 abalone_train.csv
-rw-rw-r-- 1 ec2-user ec2-user 29K May 21 18:11 abalone_testg.csv
drwxrwxrwx 3 ec2-user ec2-user 4.0K May 21 21:34 factorization_machines_mnist_2021-05-21
-rw-rw-r-- 1 ec2-user ec2-user 1.2M May 21 22:57 Abalone_Regression_SageMaker.ipynb
drwxr-xr-x 6 ec2-user ec2-user 4.0K May 21 22:57 .
drwx------ 22 ec2-user ec2-user 4.0K May 22 16:12 ..
-rw-rw-r-- 1 ec2-user ec2-user 253K May 22 16:12 abalone_libsvm.txt
-rw-rw-r-- 1 ec2-user ec2-user 188K May 22 16:13 abalone_clean.txt
###Markdown
3. Data Preparation

What remains is to split the data into suitable partitions for modeling.
###Code
def split_data(
FILE_TOTAL,
FILE_TRAIN,
FILE_VALID,
FILE_TESTG,
FRAC_TRAIN,
FRAC_VALID,
FRAC_TESTG,
):
total = [row for row in open(FILE_TOTAL, "r")]
train_file = open(FILE_TRAIN, "w")
valid_file = open(FILE_VALID, "w")
testg_file = open(FILE_TESTG, "w")
num_total = len(total)
num_train = int(FRAC_TRAIN*num_total)
num_valid = int(FRAC_VALID*num_total)
num_testg = int(FRAC_TESTG*num_total)
sizes = [num_train, num_valid, num_testg]
splits = [[], [], []]
rand_total_ind = 0
#set_trace()
for split_ind,size in enumerate(sizes):
for _ in range(size):
if len(total)<1:
print('ERROR. Make sure fractions are decimals.')
break
rand_total_ind = random.randint(0, len(total) - 1)
#print('len(total) - 1',len(total) - 1)
#print('rand_total_ind:',rand_total_ind)
#print('total[rand_total_ind]:',total[rand_total_ind])
splits[split_ind].append(total[rand_total_ind])
total.pop(rand_total_ind)
for row in splits[0]:
train_file.write(row)
print(f'Training data: {len(splits[0])} rows ({len(splits[0])/num_total})')
for row in splits[1]:
valid_file.write(row)
print(f'Validation data: {len(splits[1])} rows ({len(splits[1])/num_total})')
for row in splits[2]:
testg_file.write(row)
print(f'Testing data: {len(splits[2])} rows ({len(splits[2])/num_total})')
train_file.close()
valid_file.close()
testg_file.close()
# Split the clean dataset into train/validation/test files
FILE_TOTAL = "abalone_clean.txt"
FILE_TRAIN = "abalone_train.csv"
FILE_VALID = "abalone_valid.csv"
FILE_TESTG = "abalone_testg.csv"
FRAC_TRAIN = .70
FRAC_VALID = .15
FRAC_TESTG = .15
split_data(
FILE_TOTAL,
FILE_TRAIN,
FILE_VALID,
FILE_TESTG,
FRAC_TRAIN,
FRAC_VALID,
FRAC_TESTG,
)
# hide
!ls -altrh
###Output
total 2.6M
drwx------ 2 root root 16K May 6 16:10 lost+found
drwxr-xr-x 2 ec2-user ec2-user 4.0K May 6 16:10 .sparkmagic
-rw-rw-r-- 1 ec2-user ec2-user 458K May 19 15:14 hs_err_pid19109.log
-rw-rw-r-- 1 ec2-user ec2-user 449K May 19 19:00 PredictContributionEffort_1.ipynb
-rw-rw-rw- 1 ec2-user ec2-user 59K May 20 19:34 PredictContributionEffort_2__pyspark_mnist_xgboost.ipynb
drwxrwxr-x 2 ec2-user ec2-user 4.0K May 21 15:30 .ipynb_checkpoints
-rw-rw-r-- 1 ec2-user ec2-user 253K May 21 15:32 abalone_libsvm.txt
drwx------ 22 ec2-user ec2-user 4.0K May 21 16:48 ..
-rw-rw-r-- 1 ec2-user ec2-user 188K May 21 18:03 abalone_clean.txt
-rw-rw-r-- 1 ec2-user ec2-user 29K May 21 18:11 abalone_valid.csv
-rw-rw-r-- 1 ec2-user ec2-user 132K May 21 18:11 abalone_train.csv
-rw-rw-r-- 1 ec2-user ec2-user 29K May 21 18:11 abalone_testg.csv
-rw-rw-r-- 1 ec2-user ec2-user 943K May 21 18:12 Abalone_Regression_SageMaker.ipynb
drwxr-xr-x 5 ec2-user ec2-user 4.0K May 21 18:12 .
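###Markdown
As a brief note on the design choice above: because the clean data already lives in a DataFrame, an equivalent 70/15/15 random split can also be expressed more compactly with pandas and numpy. The sketch below is not used in the rest of the notebook; it is just an alternative worth knowing about.
###Code
# alternative split of the same clean DataFrame (sketch only, results not used below)
import numpy as np
shuffled = df.sample(frac=1.0, random_state=1)
n = len(shuffled)
train_df, valid_df, testg_df = np.split(shuffled, [int(0.70 * n), int(0.85 * n)])
print(len(train_df), len(valid_df), len(testg_df))
###Output
_____no_output_____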
###Markdown
4. Modeling

Before we build a model, we want to position the data on S3.

4.1 Position data on S3
###Code
def write_to_s3(fobj, bucket, key):
return (
boto3.Session(region_name=region)
.resource("s3")
.Bucket(bucket)
.Object(key)
.upload_fileobj(fobj)
)
def upload_to_s3(bucket, prefix, channel, filename):
fobj = open(filename, "rb")
key = f"{prefix}/{channel}/{filename}"
url = f"s3://{bucket}/{key}"
print(f"Writing to {url}")
write_to_s3(fobj, bucket, key)
# upload the files to the S3 bucket
upload_to_s3(bucket, prefix, "train", FILE_TRAIN)
upload_to_s3(bucket, prefix, "valid", FILE_VALID)
upload_to_s3(bucket, prefix, "testg", FILE_TESTG)
# hide
!aws s3 ls
###Output
2021-05-06 17:40:08 776779151861-sagemaker-us-west-2
2021-05-06 02:51:44 aws-emr-resources-776779151861-us-west-2
2021-05-21 18:24:44 learnableloopai-blog
2021-05-11 17:59:27 sagemaker-studio-776779151861-89hkk9e6uzv
2021-05-12 19:48:24 sagemaker-studio-776779151861-k5ps7zp0njh
2021-05-16 12:55:13 todproof-batch-translations
2021-05-05 23:50:17 todproof-contributions-archive
###Markdown
4.2 Setup data channels
###Code
s3_train_data = f"s3://{bucket}/{prefix}/train"
print(f"training files will be taken from: {s3_train_data}")
s3_valid_data = f"s3://{bucket}/{prefix}/valid"
print(f"validation files will be taken from: {s3_valid_data}")
s3_testg_data = f"s3://{bucket}/{prefix}/testg"
print(f"testing files will be taken from: {s3_testg_data}")
s3_output = f"s3://{bucket}/{prefix}/output"
print(f"training artifacts output location: {s3_output}")
# generating the session.s3_input() format for fit() accepted by the sdk
train_data = sagemaker.inputs.TrainingInput(
s3_train_data,
distribution="FullyReplicated",
content_type="text/csv",
s3_data_type="S3Prefix",
record_wrapping=None,
compression=None,
)
valid_data = sagemaker.inputs.TrainingInput(
s3_valid_data,
distribution="FullyReplicated",
content_type="text/csv",
s3_data_type="S3Prefix",
record_wrapping=None,
compression=None,
)
testg_data = sagemaker.inputs.TrainingInput(
s3_testg_data,
distribution="FullyReplicated",
content_type="text/csv",
s3_data_type="S3Prefix",
record_wrapping=None,
compression=None,
)
###Output
training files will be taken from: s3://learnableloopai-blog/abalone/train
validation files will be taken from: s3://learnableloopai-blog/abalone/valid
testing files will be taken from: s3://learnableloopai-blog/abalone/testg
training artifacts output location: s3://learnableloopai-blog/abalone/output
###Markdown
4.3 Training a Linear Learner model

First, we retrieve the image for the Linear Learner algorithm according to the region. Then we create an [estimator from the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) using the Linear Learner container image, and we set up the training parameters and hyperparameter configuration.
###Code
#
# get the linear learner image
image_uri = retrieve("linear-learner", boto3.Session().region_name, version="1")
# hide
print(image_uri)
# hide
# hyperparameters = {
# "feature_dim": "8",
# "epochs": "16",
# "wd": "0.01",
# "loss": "absolute_loss",
# "predictor_type": "regressor",
# "normalize_data": True,
# "optimizer": "adam",
# "mini_batch_size": "100",
# "lr_scheduler_step": "100",
# "lr_scheduler_factor": "0.99",
# "lr_scheduler_minimum_lr": "0.0001",
# "learning_rate": "0.1",
# }
# hide
# Debugger resources
# https://www.youtube.com/watch?v=8b5-lyRaFgA
# https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-debugger/xgboost_census_explanations/xgboost-census-debugger-rules.html
# https://aws.amazon.com/blogs/machine-learning/ml-explainability-with-amazon-sagemaker-debugger/
# hide
s3_output
%%time
from sagemaker.debugger import rule_configs, Rule, DebuggerHookConfig, CollectionConfig
save_interval = 3
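# Debugger saves the tensors in the 'metrics' collection below every `save_interval` training steps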
sess = sagemaker.Session()
job_name = "abalone-regression-" + strftime("%H-%M-%S", gmtime())
print("Training job: ", job_name)
linear = sagemaker.estimator.Estimator(
image_uri=image_uri,
role=role,
instance_count=1,
instance_type="ml.m4.xlarge",
#instance_type="local",
input_mode="File",
output_path=s3_output,
base_job_name="abalone-regression-sagemaker",
sagemaker_session=sess,
#hyperparameters=hyperparameters,
#train_max_run=100
debugger_hook_config=DebuggerHookConfig(
#s3_output_path="s3://learnableloopai-blog/abalone/output_debugger",
s3_output_path=s3_output,
collection_configs=[
CollectionConfig(
name="metrics",
parameters={
"save_interval": str(save_interval)
}
),
# CollectionConfig(
# name="feature_importance",
# parameters={
# "save_interval": str(save_interval)
# }
# ),
# CollectionConfig(
# name="full_shap",
# parameters={
# "save_interval": str(save_interval)
# }
# ),
# CollectionConfig(
# name="average_shap",
# parameters={
# "save_interval": str(save_interval)
# }
# ),
# CollectionConfig(
# name="mini_batch_size",
# parameters={
# "save_interval": str(save_interval)
# }
# )
]
),
rules=[
Rule.sagemaker(
rule_configs.loss_not_decreasing(),
rule_parameters={
"collection_names": "metrics",
"num_steps": str(save_interval*2),
},
),
# Rule.sagemaker(
# rule_configs.overtraining(),
# rule_parameters={
# "collection_names": "metrics",
# "patience_validation": str(10),
# },
# ),
# Rule.sagemaker(
# rule_configs.overfit(),
# rule_parameters={
# "collection_names": "metrics",
# "patience": str(10),
# },
# )
]
)
linear.set_hyperparameters(
feature_dim=8,
epochs=16,
wd=0.01,
loss="absolute_loss",
predictor_type="regressor",
normalize_data=True,
optimizer="adam",
mini_batch_size=100,
lr_scheduler_step=100,
lr_scheduler_factor=0.99,
lr_scheduler_minimum_lr=0.0001,
learning_rate=0.1,
)
# hide
#--- linear.fit??
%%time
linear.fit(inputs={
"train": train_data,
"validation": valid_data,
#"test": testg_data
},
wait=False) #cell won't block until done
# hide
# import time
# for _ in range(360):
# job_name = linear.latest_training_job.name
# client = linear.sagemaker_session.sagemaker_client
# description = client.describe_training_job(TrainingJobName=job_name)
# training_job_status = description["TrainingJobStatus"]
# rule_job_summary = linear.latest_training_job.rule_job_summary()
# rule_evaluation_status = rule_job_summary[0]["RuleEvaluationStatus"]
# print("Training Job Status: {}, Rule Evaluation Status: {}".format(training_job_status, rule_evaluation_status))
# if rule_evaluation_status in ["Stopped", "IssuesFound", "NoIssuesFound"]:
# break
# time.sleep(10)
# hide
import time
for _ in range(36):
job_name = linear.latest_training_job.name
client = linear.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
training_job_status = description["TrainingJobStatus"]
rule_job_summary = linear.latest_training_job.rule_job_summary()
rule_evaluation_status = rule_job_summary[0]["RuleEvaluationStatus"]
print(
"Training job status: {}, Rule Evaluation Status: {}".format(
training_job_status, rule_evaluation_status
)
)
if training_job_status in ["Completed", "Failed"]:
break
time.sleep(10)
# hide
linear.latest_training_job.rule_job_summary()
# hide
# Analyze debugger output
# ERROR:
# https://stackoverflow.com/questions/65812334/sagemaker-debugger-service-gives-missingcollectionfiles-error
from smdebug.trials import create_trial
s3_output_path = linear.latest_job_debugger_artifacts_path()
trial = create_trial(s3_output_path)
# hide
trial.tensor_names()
# hide
def get_data(trial, tname):
tensor = trial.tensor(tname)
steps = tensor.steps()
vals = [tensor.values(s) for s in steps]
return steps, vals
def plot_collection(trial, collection_name, regex=".*"):
    # plot every tensor in the collection whose name matches the regex
    # (pattern follows the standard SageMaker Debugger example plotting helpers)
    fig, ax = plt.subplots(figsize=(8, 6))
    tensors = trial.collection(collection_name).tensor_names
    for tname in sorted(t for t in tensors if re.match(regex, t)):
        steps, vals = get_data(trial, tname)
        ax.plot(steps, vals, label=tname)
    ax.set_xlabel("Iteration")
    ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____
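###Markdown
With these helpers in place we can plot what Debugger captured during training - for example, every tensor in the `metrics` collection (a usage sketch; the tensor names actually available for this job can be checked with `trial.tensor_names()` above).
###Code
# plot all tensors saved in the "metrics" collection
plot_collection(trial, "metrics")
plt.show()
###Output
_____no_output_____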
###Markdown
6. Deployment
###Code
%%time
linear_predictor = linear.deploy(initial_instance_count=1, instance_type="ml.c4.xlarge")
print(f"\nEndpoint: {linear_predictor.endpoint_name}")
###Output
---------------!
Endpoint: linear-learner-2021-05-21-19-45-14-793
CPU times: user 270 ms, sys: 12.6 ms, total: 283 ms
Wall time: 7min 32s
###Markdown
6.1 Test Inference

Now that the trained model is deployed at an endpoint that is up and running, we can use this endpoint for inference. To do this, we are going to configure the [predictor object](https://sagemaker.readthedocs.io/en/v1.2.4/predictors.html) to serialize the request as text/csv and to deserialize the JSON reply received from the endpoint.
###Code
# configure the predictor to accept to serialize csv input and parse the reponse as json
linear_predictor.serializer = CSVSerializer()
linear_predictor.deserializer = JSONDeserializer()
###Output
_____no_output_____
###Markdown
---

We use the test file containing the records that we held back to test the model's predictions. Run the following cell multiple times to perform inference on different random samples:
###Code
%%time
# get a testing sample from the test file
test_data = [row for row in open(FILE_TESTG, "r")]
sample = random.choice(test_data).split(",")
actual_age = sample[0]
payload = sample[1:] # removing actual age from the sample
payload = ",".join(map(str, payload))
# invoke the predictor and analyse the result
result = linear_predictor.predict(payload)
# extract the prediction value
result = round(float(result["predictions"][0]["score"]), 2)
accuracy = str(round(100 - ((abs(float(result) - float(actual_age)) / float(actual_age)) * 100), 2))
print(f"Actual age: {actual_age}\nPrediction: {result}\nAccuracy: {accuracy}")
###Output
Actual age: 9
Prediction: 8.83
Accuracy: 98.11
CPU times: user 4.66 ms, sys: 0 ns, total: 4.66 ms
Wall time: 19.6 ms
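###Markdown
A single random sample only gives a rough feel for the model. A more informative check is to score the whole hold-out file and compute the mean absolute error in rings. This is a sketch; it assumes the endpoint accepts a multi-row CSV payload (newline-separated rows), which the built-in Linear Learner endpoint does for reasonably sized requests.
###Code
# score the entire test file and compute the mean absolute error (in rings)
rows = [r.strip().split(",") for r in open(FILE_TESTG, "r") if r.strip()]
actuals = [float(r[0]) for r in rows]
payload = "\n".join(",".join(r[1:]) for r in rows)
response = linear_predictor.predict(payload)
preds = [p["score"] for p in response["predictions"]]
mae = sum(abs(a - p) for a, p in zip(actuals, preds)) / len(actuals)
print(f"Test rows: {len(actuals)}, MAE: {mae:.2f} rings")
###Output
_____no_output_____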
###Markdown
6.2 Delete the Endpoint

Having an endpoint running will incur some costs. Therefore, as a clean-up step, we should delete the endpoint.
###Code
sagemaker.Session().delete_endpoint(linear_predictor.endpoint_name)
print(f"Deleted {linear_predictor.endpoint_name} successfully!")
###Output
Deleted linear-learner-2021-05-21-19-45-14-793 successfully!