markdown (stringlengths 0-1.02M) | code (stringlengths 0-832k) | output (stringlengths 0-1.02M) | license (stringlengths 3-36) | path (stringlengths 6-265) | repo_name (stringlengths 6-127) |
---|---|---|---|---|---|
SageMakerClarifyProcessor | from sagemaker import clarify
clarify_processor = clarify.SageMakerClarifyProcessor(
role=role,
instance_count=1,
instance_type="ml.c5.2xlarge",
sagemaker_session=sess
) | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
Writing DataConfig and ModelConfig. A `DataConfig` object communicates some basic information about data I/O to Clarify: we specify where to find the input dataset, where to store the output, the target column (`label`), the header names, and the dataset type. Similarly, the `ModelConfig` object communicates information about your trained model, and `ModelPredictedLabelConfig` provides information on the format of your predictions. **Note**: To avoid additional traffic to your production models, SageMaker Clarify sets up and tears down a dedicated endpoint during processing. `ModelConfig` specifies your preferred instance type and instance count for running your model during Clarify's processing. DataConfig | bias_report_prefix = "bias/report-{}".format(pipeline_model_name)
bias_report_output_path = "s3://{}/{}".format(bucket, bias_report_prefix)
data_config = clarify.DataConfig(
s3_data_input_path=test_data_bias_s3_uri,
s3_output_path=bias_report_output_path,
label="star_rating",
features="features",
# label must be last, features in exact order as passed into model
headers=["review_body", "product_category", "star_rating"],
dataset_type="application/jsonlines",
) | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
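Since `dataset_type` is `application/jsonlines` and the headers are `review_body`, `product_category`, and `star_rating`, each input record is one JSON object per line. Below is a small, hypothetical sketch of writing such a file with the standard library; the record values are invented for illustration, and the real test file behind `test_data_bias_s3_uri` is produced elsewhere in the workshop.

```python
import json

# hypothetical records matching the headers above (label column last, features grouped as in content_template)
records = [
    {"features": ["I love this software", "Digital_Software"], "star_rating": 5},
    {"features": ["the worst", "Digital_Software"], "star_rating": 1},
]

# JSON Lines format: one JSON object per line
with open("test_data_bias.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```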
ModelConfig | model_config = clarify.ModelConfig(
model_name=pipeline_model_name,
instance_type="ml.m5.4xlarge",
instance_count=1,
content_type="application/jsonlines",
accept_type="application/jsonlines",
# {"features": ["the worst", "Digital_Software"]}
content_template='{"features":$features}',
) | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
_Note: `label` is set to the JSON key for the model prediction results_ | predictions_config = clarify.ModelPredictedLabelConfig(label="predicted_label") | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
BiasConfig | bias_config = clarify.BiasConfig(
label_values_or_threshold=[
5,
4,
], # needs to be int or str for continuous dtype, needs to be >1 for categorical dtype
facet_name="product_category",
) | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
Run Clarify Job | clarify_processor.run_post_training_bias(
data_config=data_config,
data_bias_config=bias_config,
model_config=model_config,
model_predicted_label_config=predictions_config,
# methods='all', # FlipTest requires all columns to be numeric
methods=["DPPL", "DI", "DCA", "DCR", "RD", "DAR", "DRR", "AD", "TE"],
wait=False,
logs=False,
)
run_post_training_bias_processing_job_name = clarify_processor.latest_job.job_name
run_post_training_bias_processing_job_name
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format(
region, run_post_training_bias_processing_job_name
)
)
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After About 5 Minutes</b>'.format(
region, run_post_training_bias_processing_job_name
)
)
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}?prefix={}/">S3 Output Data</a> After The Processing Job Has Completed</b>'.format(
bucket, bias_report_prefix
)
)
)
from pprint import pprint
running_processor = sagemaker.processing.ProcessingJob.from_processing_name(
processing_job_name=run_post_training_bias_processing_job_name, sagemaker_session=sess
)
processing_job_description = running_processor.describe()
pprint(processing_job_description)
running_processor.wait(logs=False) | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
Download Report From S3 | !aws s3 ls $bias_report_output_path/
!aws s3 cp --recursive $bias_report_output_path ./generated_bias_report/
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="./generated_bias_report/report.html">Bias Report</a></b>')) | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
View Bias Report in Studio. In Studio, you can view the results under the Experiments tab. Each bias metric has detailed explanations with examples that you can explore. You could also summarize the results in a handy table! Release Resources | %%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script> | _____no_output_____ | Apache-2.0 | 00_quickstart/09_Detect_Model_Bias_Clarify.ipynb | MarcusFra/workshop |
Covid 19 Prediction Study - CBC. Importing libraries | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
| _____no_output_____ | MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
Baskent Data | # başkent university data
veriler = pd.read_excel(r'covid data 05.xlsx')
# başkent uni data
print('total number of pcr results: ',len(veriler['pcr']))
print('number of positive pcr results: ',len(veriler[veriler['pcr']=='positive']))
print('number of negative pcr results: ',len(veriler[veriler['pcr']=='negative'])) | total number of pcr results: 1391
number of positive pcr results: 707
number of negative pcr results: 684
| MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
Sao Paulo dataset | veri_saopaulo = pd.read_excel(r'sao_dataset.xlsx' )
print('total number of pcr results: ',len(veri_saopaulo['SARS-Cov-2 exam result']))
print('number of positive pcr results: ',len(veri_saopaulo[veri_saopaulo['SARS-Cov-2 exam result']=='positive']))
print('number of negative pcr results: ',len(veri_saopaulo[veri_saopaulo['SARS-Cov-2 exam result']=='negative']))
veri_saopaulo_l = list(veri_saopaulo.columns)
veri_saopaulo_l
veri_saopaulo_l2 = ['Hematocrit', 'Hemoglobin', 'Platelets', 'Mean platelet volume ',
'Red blood Cells', 'Lymphocytes', 'Mean corpuscular hemoglobin concentration\xa0(MCHC)',
'Leukocytes', 'Basophils', 'Mean corpuscular hemoglobin (MCH)', 'Eosinophils',
'Mean corpuscular volume (MCV)', 'Monocytes','Red blood cell distribution width (RDW)']
len(veri_saopaulo_l2)
veriler_sao_cbc = veri_saopaulo[['Hemoglobin','Hematocrit', 'Lymphocytes', 'Leukocytes'
,'Mean corpuscular hemoglobin (MCH)','Mean corpuscular hemoglobin concentration (MCHC)'
,'Mean corpuscular volume (MCV)','Monocytes','Neutrophils','Basophils','Eosinophils'
,'Red blood Cells','Red blood cell distribution width (RDW)','Platelets','SARS-Cov-2 exam result']]
veriler_sao_cbc = veriler_sao_cbc.dropna(axis=0)
veriler_sao_cbc.describe()
# PCR result to integer (0: negative, 1: positive)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
veriler_sao_cbc["PCR_result"] = le.fit_transform(veriler_sao_cbc["SARS-Cov-2 exam result"])
veriler_sao_cbc.head()
# Sao Paulo Data
print('total number of pcr results: ',len(veriler_sao_cbc['SARS-Cov-2 exam result']))
print('number of positive pcr results: ',len(veriler_sao_cbc[veriler_sao_cbc['SARS-Cov-2 exam result']=='positive']))
print('number of negative pcr results: ',len(veriler_sao_cbc[veriler_sao_cbc['SARS-Cov-2 exam result']=='negative']))
# select random 75 rows to reach balanced data
saopaulo_negative = veriler_sao_cbc[veriler_sao_cbc['SARS-Cov-2 exam result']=='negative']
saopaulo_negative75 = saopaulo_negative.sample(n = 75)
saopaulo_negative75
saopaulo_positive75 = veriler_sao_cbc[veriler_sao_cbc['SARS-Cov-2 exam result']=='positive']
saopaulo_positive75
# concatenating positive and negative datasets
saopaulo_last = [saopaulo_positive75,saopaulo_negative75]
saopaulo_lastdf = pd.concat(saopaulo_last)
saopaulo_lastdf
Xs = saopaulo_lastdf[['Hemoglobin','Hematocrit', 'Lymphocytes', 'Leukocytes'
,'Mean corpuscular hemoglobin (MCH)','Mean corpuscular hemoglobin concentration (MCHC)'
,'Mean corpuscular volume (MCV)','Monocytes','Neutrophils','Basophils','Eosinophils'
,'Red blood Cells','Red blood cell distribution width (RDW)','Platelets']].values
Ys = saopaulo_lastdf['PCR_result'].values | _____no_output_____ | MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
Baskent Data features (demographic data) | # Exporting demographic data to Excel
veriler.describe().to_excel(r'/Users/hikmetcancubukcu/Desktop/covidai/veriler başkent covid/covid cbc demographic2.xlsx')
veriler.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1391 entries, 0 to 1390
Data columns (total 24 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 hastano 1391 non-null int64
1 yasiondalik 1391 non-null float64
2 cinsiyet 1391 non-null object
3 alanin_aminotransferaz 1391 non-null int64
4 aspartat_aminotransferaz 1391 non-null int64
5 basophils 1391 non-null float64
6 c_reactive_protein 1391 non-null float64
7 eosinophils 1391 non-null float64
8 hb 1391 non-null float64
9 hct 1391 non-null float64
10 kreatinin 1391 non-null float64
11 laktat_dehidrogenaz 1391 non-null int64
12 lenfosit 1391 non-null float64
13 lokosit 1391 non-null float64
14 mch 1391 non-null float64
15 mchc 1391 non-null float64
16 mcv 1391 non-null float64
17 monocytes 1391 non-null float64
18 notrofil 1391 non-null float64
19 rbc 1391 non-null float64
20 rdw 1391 non-null float64
21 total_bilirubin 1391 non-null float64
22 trombosit 1391 non-null float64
23 pcr 1391 non-null object
dtypes: float64(18), int64(4), object(2)
memory usage: 260.9+ KB
| MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
Baskent Data preprocessing | # Gender to integer (0 : E (male), 1 : K (female))
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
veriler["gender"] = le.fit_transform(veriler["cinsiyet"])
# Pcr to numeric values (negative : 0 , positive : 1)
veriler["pcr_result"] = le.fit_transform(veriler["pcr"])
veriler.info() # başkent uni data
# Dependent & Independent variables (cbc)
X = veriler[['hb','hct','lenfosit','lokosit','mch','mchc','mcv','monocytes','notrofil',
'basophils','eosinophils', 'rbc','rdw','trombosit']].values
Y = veriler['pcr_result'].values
# Train - Test Split (80% - 20%)
from sklearn.model_selection import train_test_split
x_train, x_test,y_train,y_test = train_test_split(X,Y,stratify=Y,test_size=0.20, random_state=0)
print('n of test set', len(y_test))
print('n of train set', len(y_train))
# Standardization
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(x_train)
X_test = sc.transform(x_test)  # apply the scaler fitted on the training set to avoid data leakage
#confusion matrix function
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
| _____no_output_____ | MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
Logistic Regression | # importing library
from sklearn.linear_model import LogisticRegression
logr= LogisticRegression(random_state=0)
logr.fit(X_train,y_train)
y_hat= logr.predict(X_test)
yhat_logr = logr.predict_proba(X_test)
y_hat22 = y_hat
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_hat, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(y_test, y_hat))
print('precision : positive predictive value')
print('recall : sensitivity')
# 10 fold cross validation
from sklearn.model_selection import cross_val_score
'''
1. estimator : the classifier (our model in this case)
2. X
3. Y
4. cv : number of folds
'''
basari = cross_val_score(estimator = logr, X=X_train, y=y_train , cv = 10)
print(basari.mean())
print(basari.std())
# sao paulo external validation - logistic regression
y_hats= logr.predict(Xs)
yhats_logr = logr.predict_proba(Xs)
y_hats22 = y_hats
# Compute confusion matrix
cnf_matrix = confusion_matrix(Ys, y_hats22, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(Ys, y_hats))
print('precision : positive predictive value')
print('recall : sensitivity') | precision recall f1-score support
0 0.85 0.59 0.69 75
1 0.68 0.89 0.77 75
accuracy 0.74 150
macro avg 0.76 0.74 0.73 150
weighted avg 0.76 0.74 0.73 150
precision : positive predictive value
recall : sensitivity
| MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
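The classification report above reports precision (positive predictive value) and recall (sensitivity). A small sketch of deriving sensitivity, specificity, PPV and NPV directly from the 2x2 confusion matrix is shown below; it reuses the same `y_test`/`y_hat` arrays and is an add-on, not part of the original notebook.

```python
from sklearn.metrics import confusion_matrix

# with labels=[1, 0], rows/cols are ordered [positive, negative], so ravel() gives TP, FN, FP, TN
tp, fn, fp, tn = confusion_matrix(y_test, y_hat, labels=[1, 0]).ravel()

sensitivity = tp / (tp + fn)   # recall of the positive class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)           # precision of the positive class
npv = tn / (tn + fn)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```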
Support Vector Machines | from sklearn.svm import SVC
svc= SVC(kernel="rbf",probability=True)
svc.fit(X_train, y_train)
yhat= svc.predict(X_test)
yhat_svm = svc.predict_proba(X_test)
yhat4 = yhat # svm prediction => yhat4
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat4, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(y_test, yhat4))
print('precision : positive predictive value')
print('recall : sensitivity')
# 10 fold cross validation
from sklearn.model_selection import cross_val_score
'''
1. estimator : the classifier (our model in this case)
2. X
3. Y
4. cv : number of folds
'''
basari = cross_val_score(estimator = svc, X=X_train, y=y_train , cv = 10)
print(basari.mean())
print(basari.std())
# SAO PAULO EXTERNAL VALIDATION
y_hats4= svc.predict(Xs)
yhats2_svc = svc.predict_proba(Xs)
# Compute confusion matrix
cnf_matrix = confusion_matrix(Ys, y_hats4, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(Ys, y_hats4))
print('precision : positive predictive value')
print('recall : sensitivity') | precision recall f1-score support
0 0.91 0.67 0.77 75
1 0.74 0.93 0.82 75
accuracy 0.80 150
macro avg 0.82 0.80 0.80 150
weighted avg 0.82 0.80 0.80 150
precision : positive predictive value
recall : sensitivity
| MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
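The SVM above uses an RBF kernel with default `C` and `gamma`. If one wanted to tune these hyperparameters, a cross-validated grid search along the following lines could be used; the parameter grid values are illustrative assumptions, not taken from the original study.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# illustrative grid; values are assumptions, not from the original notebook
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}
grid = GridSearchCV(SVC(kernel="rbf", probability=True), param_grid, cv=10, scoring="roc_auc")
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```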
RANDOM FOREST CLASSIFIER | from sklearn.ensemble import RandomForestClassifier
rfc= RandomForestClassifier(n_estimators=200,criterion="entropy")
rfc.fit(X_train,y_train)
yhat7= rfc.predict(X_test)
yhat_rf = rfc.predict_proba(X_test)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat7, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(y_test, yhat7))
print('precision : positive predictive value')
print('recall : sensitivity')
# 10 fold cross validation
from sklearn.model_selection import cross_val_score
'''
1. estimator : the classifier (our model in this case)
2. X
3. Y
4. cv : number of folds
'''
basari = cross_val_score(estimator = rfc, X=X_train, y=y_train , cv = 10)
print(basari.mean())
print(basari.std())
# SAO PAULO EXTERNAL VALIDATION
yhats7= rfc.predict(Xs)
yhats7_rfc = rfc.predict_proba(Xs)
# Compute confusion matrix
cnf_matrix = confusion_matrix(Ys, yhats7, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(Ys, yhats7))
print('precision : positive predictive value')
print('recall : sensitivity') | precision recall f1-score support
0 0.88 0.68 0.77 75
1 0.74 0.91 0.81 75
accuracy 0.79 150
macro avg 0.81 0.79 0.79 150
weighted avg 0.81 0.79 0.79 150
precision : positive predictive value
recall : sensitivity
| MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
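Random forests expose impurity-based feature importances, which can help interpret which CBC parameters drive the prediction. A short sketch reusing the feature list from the preprocessing step is shown below; it is an interpretation add-on, not part of the original analysis.

```python
import pandas as pd

feature_names = ['hb','hct','lenfosit','lokosit','mch','mchc','mcv','monocytes','notrofil',
                 'basophils','eosinophils','rbc','rdw','trombosit']
# pair each feature with its impurity-based importance and sort descending
importances = pd.Series(rfc.feature_importances_, index=feature_names).sort_values(ascending=False)
print(importances)
```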
XGBOOST | from sklearn.ensemble import GradientBoostingClassifier
classifier = GradientBoostingClassifier()
classifier.fit(X_train, y_train)
yhat8 = classifier.predict(X_test)
yhat_xgboost = classifier.predict_proba(X_test)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat8, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(y_test, yhat8))
print('precision : positive predictive value')
print('recall : sensitivity')
# 10 fold cross validation
from sklearn.model_selection import cross_val_score
'''
1. estimator : the classifier (our model in this case)
2. X
3. Y
4. cv : number of folds
'''
basari = cross_val_score(estimator = classifier, X=X_train, y=y_train , cv = 10)  # score the gradient boosting model here, not rfc
print(basari.mean())
print(basari.std())
# SAO PAULO EXTERNAL VALIDATION
y_hats8= classifier.predict(Xs)
y_hats_xgboost = classifier.predict_proba(Xs)
# Compute confusion matrix
cnf_matrix = confusion_matrix(Ys, y_hats8, labels=[1,0])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['PCR positive','PCR negative'],normalize= False, title='Confusion matrix')
print (classification_report(Ys, y_hats8))
print('precision : positive predictive value')
print('recall : sensitivity') | precision recall f1-score support
0 0.81 0.63 0.71 75
1 0.70 0.85 0.77 75
accuracy 0.74 150
macro avg 0.75 0.74 0.74 150
weighted avg 0.75 0.74 0.74 150
precision : positive predictive value
recall : sensitivity
| MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
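Note that this section is titled XGBOOST but the code actually uses scikit-learn's `GradientBoostingClassifier`. If the XGBoost library itself were preferred, a roughly equivalent sketch would be the following; it assumes the `xgboost` package is installed, and the hyperparameters are illustrative rather than taken from the study.

```python
# hedged alternative using the xgboost package instead of sklearn's GradientBoostingClassifier
from xgboost import XGBClassifier

xgb = XGBClassifier(n_estimators=100, learning_rate=0.1)
xgb.fit(X_train, y_train)
yhat_xgb = xgb.predict(X_test)
yhat_xgb_proba = xgb.predict_proba(X_test)
```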
ROC & AUC | #baskent dataset
from sklearn.metrics import roc_curve, auc
logr_fpr, logr_tpr, threshold = roc_curve(y_test, yhat_logr[:,1]) # logr roc data
auc_logr = auc(logr_fpr, logr_tpr)
svm_fpr, svm_tpr, threshold = roc_curve(y_test, yhat_svm[:,1]) # svm roc data
auc_svm = auc(svm_fpr, svm_tpr)
rf_fpr, rf_tpr, threshold = roc_curve(y_test, yhat_rf[:,1]) # rf roc data
auc_rf = auc(rf_fpr, rf_tpr)
xgboost_fpr, xgboost_tpr, threshold = roc_curve(y_test, yhat_xgboost[:,1]) # xgboost roc data
auc_xgboost = auc(xgboost_fpr, xgboost_tpr)
plt.figure(figsize=(4, 4), dpi=300)
plt.plot(rf_fpr, rf_tpr, linestyle='-', label='Random Forest (AUC = %0.3f)' % auc_rf)
plt.plot(logr_fpr, logr_tpr, linestyle='-', label='Logistic (AUC = %0.3f)' % auc_logr)
plt.plot(svm_fpr, svm_tpr, linestyle='-', label='SVM (AUC = %0.3f)' % auc_svm)
plt.plot(xgboost_fpr, xgboost_tpr, linestyle='-', label='XGBoost (AUC = %0.3f)' % auc_xgboost)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(fontsize=8)
plt.show()
# sao paulo dataset
from sklearn.metrics import roc_curve, auc
logr_fpr, logr_tpr, threshold = roc_curve(Ys, yhats_logr[:,1]) # logr roc data
auc_logr = auc(logr_fpr, logr_tpr)
svm_fpr, svm_tpr, threshold = roc_curve(Ys, yhats2_svc[:,1]) # svm roc data
auc_svm = auc(svm_fpr, svm_tpr)
rf_fpr, rf_tpr, threshold = roc_curve(Ys, yhats7_rfc[:,1]) # rf roc data
auc_rf = auc(rf_fpr, rf_tpr)
xgboost_fpr, xgboost_tpr, threshold = roc_curve(Ys, y_hats_xgboost[:,1]) # xgboost roc data
auc_xgboost = auc(xgboost_fpr, xgboost_tpr)
plt.figure(figsize=(4, 4), dpi=300)
plt.plot(xgboost_fpr, xgboost_tpr, linestyle='-', label='XGBoost (AUC = %0.3f)' % auc_xgboost)
plt.plot(rf_fpr, rf_tpr, linestyle='-', label='Random Forest (AUC = %0.3f)' % auc_rf)
plt.plot(svm_fpr, svm_tpr, linestyle='-', label='SVM (AUC = %0.3f)' % auc_svm)
plt.plot(logr_fpr, logr_tpr, linestyle='-', label='Logistic (AUC = %0.3f)' % auc_logr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(fontsize=8)
plt.show()
yhat22= y_hat22*1
#yhat22 = [item for sublist in yhat22 for item in sublist]
yhat33= yhat4*1
yhat44= yhat7*1
yhat55= yhat8*1
roc_data_array = [yhat55,yhat22,yhat33,yhat44,y_test]
roc_data = pd.DataFrame(data=roc_data_array)
roc_data.transpose().to_excel(r'roc_covid_cbc_last.xlsx')
# validation data cbc
yhat222= y_hats22*1
#yhat22 = [item for sublist in yhat22 for item in sublist]
yhat333= y_hats4*1
yhat444= yhats7*1
yhat555= y_hats8*1
roc_data_array = [yhat555,yhat222,yhat333,yhat444,Ys]
roc_data = pd.DataFrame(data=roc_data_array)
roc_data.transpose().to_excel(r'roc_covid_cbc_last_val.xlsx') | _____no_output_____ | MIT | covid_study_ver_cbc_4_sao.ipynb | hikmetc/COVID-19-AI |
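The ROC curves above report point estimates of AUC. If confidence intervals were desired, a simple percentile bootstrap over the test set could be added, as sketched below for the logistic regression scores; this is an illustrative addition, not part of the original notebook.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
scores = yhat_logr[:, 1]                              # predicted probabilities from the logistic model
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_test), len(y_test))   # resample test cases with replacement
    if len(np.unique(y_test[idx])) < 2:               # skip resamples containing a single class
        continue
    aucs.append(roc_auc_score(y_test[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_test, scores):.3f} (95% bootstrap CI {lo:.3f}-{hi:.3f})")
```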
Uncomment the following line to install [geemap](https://geemap.org) and [cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html#installing) if needed. Keep in mind that cartopy can be challenging to install. If you are unable to install cartopy on your computer, you can try Google Colab with this [notebook example](https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/cartoee_colab.ipynb). See below the commands to install cartopy and geemap using conda/mamba: ```conda create -n carto python=3.8; conda activate carto; conda install mamba -c conda-forge; mamba install cartopy scipy -c conda-forge; mamba install geemap -c conda-forge; jupyter notebook``` | # !pip install cartopy scipy
# !pip install geemap | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
How to create publication quality maps using `cartoee`. `cartoee` is a lightweight module to aid in creating publication quality maps from Earth Engine processing results without having to download data. The `cartoee` package does this by requesting png images from EE results (which are usually good enough for visualization), and `cartopy` is used to create the plots. Utility functions are available to create plot aesthetics such as gridlines or color bars. **The notebook and the geemap cartoee module ([cartoee.py](https://geemap.org/cartoee)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.** | %pylab inline
import ee
import geemap
# import the cartoee functionality from geemap
from geemap import cartoee
geemap.ee_initialize() | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
Plotting an image. In this first example we will explore the most basic functionality, including plotting an image, adding a colorbar, and adding visual aesthetic features. Here we will use SRTM data to plot global elevation. | # get an image
srtm = ee.Image("CGIAR/SRTM90_V4")
# geospatial region in format [E,S,W,N]
region = [180, -60, -180, 85] # define bounding box to request data
vis = {'min':0, 'max':3000} # define visualization parameters for image
fig = plt.figure(figsize=(15, 10))
# use cartoee to get a map
ax = cartoee.get_map(srtm, region=region, vis_params=vis)
# add a colorbar to the map using the visualization params we passed to the map
cartoee.add_colorbar(ax, vis, loc="bottom", label="Elevation", orientation="horizontal")
# add gridlines to the map at a specified interval
cartoee.add_gridlines(ax, interval=[60,30], linestyle=":")
# add coastlines using the cartopy api
ax.coastlines(color="red")
show() | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
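Since the goal is publication quality output, note that the figure produced here is an ordinary matplotlib figure, so it can be written to disk at high resolution; the filename and DPI below are arbitrary choices, not part of the original notebook.

```python
# save the current cartoee/matplotlib figure (filename and DPI are arbitrary)
plt.savefig("global_elevation.png", dpi=300, bbox_inches="tight")
```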
This is a decent map for a minimal amount of code. But we can also easily use matplotlib colormaps to visualize our EE results and add more color. Here we add a `cmap` keyword to the `.get_map()` and `.add_colorbar()` functions. | fig = plt.figure(figsize=(15, 10))
cmap = "gist_earth" # colormap we want to use
# cmap = "terrain"
# use cartoee to get a map
ax = cartoee.get_map(srtm, region=region, vis_params=vis, cmap=cmap)
# add a colorbar to the map using the visualization params we passed to the map
cartoee.add_colorbar(ax, vis, cmap=cmap, loc="right", label="Elevation", orientation="vertical")
# add gridlines to the map at a specified interval
cartoee.add_gridlines(ax, interval=[60,30], linestyle="--")
# add coastlines using the cartopy api
ax.coastlines(color="red")
ax.set_title(label = 'Global Elevation Map', fontsize=15)
show() | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
Plotting an RGB image. `cartoee` also allows for plotting RGB image results directly. Here is an example of plotting a Landsat false-color scene. | # get a landsat image to visualize
image = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_044034_20140318')
# define the visualization parameters to view
vis ={"bands": ['B5', 'B4', 'B3'], "min": 0, "max":5000, "gamma":1.3}
fig = plt.figure(figsize=(15, 10))
# use cartoee to get a map
ax = cartoee.get_map(image, vis_params=vis)
# pad the view for some visual appeal
cartoee.pad_view(ax)
# add the gridlines and specify that the xtick labels be rotated 45 degrees
cartoee.add_gridlines(ax,interval=0.5,xtick_rotation=45,linestyle=":")
# add the coastline
ax.coastlines(color="yellow")
show() | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
By default, if a region is not provided via the `region` keyword the whole extent of the image will be plotted as seen in the previous Landsat example. We can also zoom to a specific region of an image by defining the region to plot. | fig = plt.figure(figsize=(15, 10))
# here is the bounding box of the map extent we want to use
# formatted as [E,S,W,N]
zoom_region = [-121.8025, 37.3458, -122.6265, 37.9178]
# plot the map over the region of interest
ax = cartoee.get_map(image, vis_params=vis, region=zoom_region)
# add the gridlines and specify that the xtick labels be rotated 45 degrees
cartoee.add_gridlines(ax, interval=0.15, xtick_rotation=45, linestyle=":")
# add coastline
ax.coastlines(color="yellow")
show() | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
Adding north arrow and scale bar | fig = plt.figure(figsize=(15, 10))
# here is the bounding box of the map extent we want to use
# formatted as [E,S,W,N]
zoom_region = [-121.8025, 37.3458, -122.6265, 37.9178]
# plot the map over the region of interest
ax = cartoee.get_map(image, vis_params=vis, region=zoom_region)
# add the gridlines and specify that the xtick labels be rotated 45 degrees
cartoee.add_gridlines(ax, interval=0.15, xtick_rotation=45, linestyle=":")
# add coastline
ax.coastlines(color="yellow")
# add north arrow
cartoee.add_north_arrow(ax, text="N", xy=(0.05, 0.25), text_color="white", arrow_color="white", fontsize=20)
# add scale bar
cartoee.add_scale_bar_lite(ax, length=10, xy=(0.1, 0.05), fontsize=20, color="white", unit="km")
ax.set_title(label = 'Landsat False Color Composite (Band 5/4/3)', fontsize=15)
show() | _____no_output_____ | MIT | examples/notebooks/50_cartoee_quickstart.ipynb | Yisheng-Li/geemap |
EXERCISE 8. Wheat is one of the three most widely produced grains globally, along with corn and rice, and the one most widely consumed by humans in Western civilization since antiquity. Wheat grain is used to make flour, whole-wheat flour, semolina, beer, and a wide variety of food products. We are asked to classify wheat seeds belonging to the Kama, Rosa, and Canadian varieties. There are 70 samples of each variety, whose seeds were measured for different geometric properties: area, perimeter, compactness, length, width, asymmetry coefficient, and carpel length (all continuous real values). Use perceptrons or an artificial neural network (whichever is more suitable) to build a classifier for the three wheat seed types from the available samples. Report the criterion used to decide which type of classifier to train, and the architecture and parameters used in its training (as applicable). Use only 90% of the available samples of each variety for training. Report the confusion matrix produced by the best classifier when evaluated on the training samples, and the matrix that classifier produces on the remaining samples reserved for testing. | import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import mpld3
%matplotlib inline
mpld3.enable_notebook()
from cperceptron import Perceptron
from cbackpropagation import ANN #, Identidad, Sigmoide
import patrones as magia
def progreso(ann, X, T, y=None, n=-1, E=None):
if n % 20 == 0:
print("Pasos: {0} - Error: {1:.32f}".format(n, E))
def progresoPerceptron(perceptron, X, T, n):
y = perceptron.evaluar(X)
incorrectas = (T != y).sum()
print("Pasos: {0}\tIncorrectas: {1}\n".format(n, incorrectas))
semillas = np.load('semillas.npy')
datos = semillas[:, :-1]
tipos = semillas[:, -1]
# tipos == 1 --> Kama
# tipos == 2 --> Rosa
# tipos == 3 --> Canadiense
# Build the patterns
clases, patronesEnt, patronesTest = magia.generar_patrones(
magia.escalar(datos),tipos,90)
X, T = magia.armar_patrones_y_salida_esperada(clases,patronesEnt)
Xtest, Ttest = magia.armar_patrones_y_salida_esperada(clases,patronesTest)  # build the test patterns from the held-out samples
# Create the neural network
ocultas = 10
entradas = X.shape[1]
salidas = T.shape[1]
ann = ANN(entradas, ocultas, salidas)
ann.reiniciar()
# Train
E, n = ann.entrenar_rprop(X, T, min_error=0, max_pasos=5000, callback=progreso, frecuencia_callback=1000)
print("\nRed entrenada en {0} pasos con un error de {1:.32f}".format(n, E))
# Evaluate
Y = (ann.evaluar(Xtest) >= 0.97)
magia.matriz_de_confusion(Ttest,Y) | _____no_output_____ | MIT | Argentina - Mondiola Rock - 90 pts/Practica/TP1/ejercicio 8/.ipynb_checkpoints/Ejercicio 8-checkpoint.ipynb | parolaraul/itChallengeML2017 |
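The exercise code above relies on the course's custom modules (`cperceptron`, `cbackpropagation`, `patrones`), which are not shown here. As a self-contained point of comparison, a minimal sketch using scikit-learn's `MLPClassifier` with the same 90/10 split idea is given below; the hidden-layer size and other settings are illustrative assumptions, not the course's prescribed architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

seeds = np.load('semillas.npy')              # same file as above: features plus class label in the last column
X_all, y_all = seeds[:, :-1], seeds[:, -1]
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, test_size=0.10, stratify=y_all, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

print(confusion_matrix(y_tr, clf.predict(scaler.transform(X_tr))))   # training confusion matrix
print(confusion_matrix(y_te, clf.predict(scaler.transform(X_te))))   # held-out test confusion matrix
```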
Model Description - Apply a transformer-based model to pfam/unirep_50 data and extract the embedding features. > In this tutorial, we train an nn.TransformerEncoder model on a language modeling task. The language modeling task is to assign a probability for the likelihood of a given word (or a sequence of words) to follow a sequence of words. A sequence of tokens is passed to the embedding layer first, followed by a positional encoding layer to account for the order of the words (see the next paragraph for more details). The nn.TransformerEncoder consists of multiple layers of nn.TransformerEncoderLayer. Along with the input sequence, a square attention mask is required because the self-attention layers in nn.TransformerEncoder are only allowed to attend to the earlier positions in the sequence. For the language modeling task, any tokens on future positions should be masked. To obtain the actual words, the output of the nn.TransformerEncoder model is sent to the final Linear layer, which is followed by a log-Softmax function. Math and model formulation and code references: - Attention Is All You Need https://arxiv.org/abs/1706.03762 - ResNet https://towardsdatascience.com/understanding-and-visualizing-resnets-442284831be8 - MIT Visualization http://jalammar.github.io/illustrated-transformer/ - The Annotated Transformer http://nlp.seas.harvard.edu/2018/04/03/attention.html#a-real-world-example | import math
import torch.nn as nn
import argparse
import random
import warnings
import numpy as np
import torch
import torch.nn.functional as F
from torch import optim
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from torch.autograd import Variable
import itertools
import pandas as pd
# seed = 7
# torch.manual_seed(seed)
# np.random.seed(seed)
pfamA_motors = pd.read_csv("../data/pfamA_motors.csv")
df_dev = pd.read_csv("../data/df_dev.csv")
pfamA_motors = pfamA_motors.iloc[:,1:]
clan_train_dat = pfamA_motors.groupby("clan").head(4000)
clan_train_dat = clan_train_dat.sample(frac=1).reset_index(drop=True)
clan_test_dat = pfamA_motors.loc[~pfamA_motors["id"].isin(clan_train_dat["id"]),:].groupby("clan").head(400)
clan_train_dat.shape
def df_to_tup(dat):
data = []
for i in range(dat.shape[0]):
row = dat.iloc[i,:]
tup = (row["seq"],row["clan"])
data.append(tup)
return data
clan_training_data = df_to_tup(clan_train_dat)
clan_test_data = df_to_tup(clan_test_dat)
for seq,clan in clan_training_data:
print(seq)
print(clan)
break
aminoacid_list = [
'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L',
'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y'
]
clan_list = ["actin_like","tubulin_c","tubulin_binding","p_loop_gtpase"]
aa_to_ix = dict(zip(aminoacid_list, np.arange(1, 21)))
clan_to_ix = dict(zip(clan_list, np.arange(0, 4)))
def word_to_index(seq,to_ix):
"Returns a list of indices (integers) from a list of words."
return [to_ix.get(word, 0) for word in seq]
ix_to_aa = dict(zip(np.arange(1, 21), aminoacid_list))
ix_to_clan = dict(zip(np.arange(0, 4), clan_list))
def index_to_word(ixs,ix_to):
"Returns a list of words, given a list of their corresponding indices."
return [ix_to.get(ix, 'X') for ix in ixs]
def prepare_sequence(seq):
idxs = word_to_index(seq[0:-1],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
def prepare_labels(seq):
idxs = word_to_index(seq[1:],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
prepare_labels('YCHXXXXX')
# set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
class PositionalEncoding(nn.Module):
"""
PositionalEncoding module injects some information about the relative or absolute position of
the tokens in the sequence. The positional encodings have the same dimension as the embeddings
so that the two can be summed. Here, we use sine and cosine functions of different frequencies.
"""
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
# pe[:, 0::2] = torch.sin(position * div_term)
# pe[:, 1::2] = torch.cos(position * div_term)
# pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
# x = x + self.pe[:x.size(0), :]
# print("x.size() : ", x.size())
# print("self.pe.size() :", self.pe[:x.size(0),:,:].size())
x = torch.add(x ,Variable(self.pe[:x.size(0),:,:], requires_grad=False))
return self.dropout(x)
class TransformerModel(nn.Module):
def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
super(TransformerModel, self).__init__()
from torch.nn import TransformerEncoder, TransformerEncoderLayer
self.model_type = 'Transformer'
self.src_mask = None
self.pos_encoder = PositionalEncoding(ninp)
encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
self.decoder = nn.Linear(ninp, ntoken)
self.init_weights()
def _generate_square_subsequent_mask(self, sz):
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange, initrange)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
def forward(self, src):
if self.src_mask is None or self.src_mask.size(0) != src.size(0):
device = src.device
mask = self._generate_square_subsequent_mask(src.size(0)).to(device)
self.src_mask = mask
# print("src.device: ", src.device)
src = self.encoder(src) * math.sqrt(self.ninp)
# print("self.encoder(src) size: ", src.size())
src = self.pos_encoder(src)
# print("elf.pos_encoder(src) size: ", src.size())
output = self.transformer_encoder(src, self.src_mask)
# print("output size: ", output.size())
output = self.decoder(output)
return output
ntokens = len(aminoacid_list) + 1 # the size of vocabulary
emsize = 12 # embedding dimension
nhid = 100 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 6 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 12 # the number of heads in the multiheadattention models
dropout = 0.1 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout)
import time
criterion = nn.CrossEntropyLoss()
lr = 3.0 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
model.to(device)
model.train() # Turn on the train mode
start_time = time.time()
print_every = 1
loss_vector = []
for epoch in np.arange(0, df_dev.shape[0]):
seq = df_dev.iloc[epoch, 6]
sentence_in = prepare_sequence(seq)
targets = prepare_labels(seq)
# sentence_in = sentence_in.to(device = device)
sentence_in = sentence_in.unsqueeze(1).to(device = device)
targets = targets.to(device = device)
optimizer.zero_grad()
output = model(sentence_in)
print("targets size: ", targets.size())
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
if epoch % print_every == 0:
print(f"At Epoch: %.1f"% epoch)
print(f"Loss %.4f"% loss)
loss_vector.append(loss)
break
start_time = time.time()
print_every = 1000
# loss_vector = []
for epoch in np.arange(0, df_dev.shape[0]):
seq = df_dev.iloc[epoch, 6]
sentence_in = prepare_sequence(seq)
targets = prepare_labels(seq)
# sentence_in = sentence_in.to(device = device)
sentence_in = sentence_in.unsqueeze(1).to(device = device)
targets = targets.to(device = device)
optimizer.zero_grad()
output = model(sentence_in)
# print("targets size: ", targets.size())
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
if epoch % print_every == 0:
print(f"At Epoch: %.1f"% epoch)
print(f"Loss %.4f"% loss)
elapsed = time.time() - start_time
print(f"time elapsed %.4f"% elapsed)
torch.save(model.state_dict(), "../data/transformer_encoder_201012.pt")
# loss_vector.append(loss)
torch.save(model.state_dict(), "../data/transformer_encoder_201012.pt")
print("done")
ntokens = len(aminoacid_list) + 1 # the size of vocabulary
emsize = 128 # embedding dimension
nhid = 100 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 3 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 8 # the number of heads in the multiheadattention models (must divide emsize=128 evenly; 12 would raise an error)
dropout = 0.1 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout) | _____no_output_____ | MIT | code/first_try/.ipynb_checkpoints/transformer_encoder-checkpoint.ipynb | steveyu323/motor_embedding |
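The stated goal is to extract embedding features from the encoder. One hedged way to do that with a `TransformerModel` instance (an assumption about intended usage, not code from the notebook) is to run the token embedding, positional encoding and transformer encoder stack while skipping the final decoder, then average over sequence positions:

```python
import math
import torch

def embed_sequence(model, seq):
    """Return a per-sequence embedding by averaging the encoder outputs (sketch)."""
    model.eval()
    with torch.no_grad():
        src = prepare_sequence(seq).unsqueeze(1).to(device)       # shape (seq_len, batch=1)
        x = model.encoder(src) * math.sqrt(model.ninp)            # token embeddings
        x = model.pos_encoder(x)                                  # add positional encoding
        mask = model._generate_square_subsequent_mask(src.size(0)).to(device)
        h = model.transformer_encoder(x, mask)                    # (seq_len, 1, ninp)
        return h.mean(dim=0).squeeze(0)                           # (ninp,) feature vector

# example usage on one training sequence
seq, clan = clan_training_data[0]
features = embed_sequence(model, seq)
print(features.shape)
```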
Meridional Overturning. `mom6_tools.moc` functions for computing and plotting meridional overturning. The goals of this notebook are the following: 1) serve as an example of how to compute a meridional overturning streamfunction (global and Atlantic) from CESM/MOM output; 2) evaluate model experiments by comparing transports against observed estimates; 3) compare model results vs. another model's results (TODO). | %matplotlib inline
import matplotlib
import numpy as np
import xarray as xr
# mom6_tools
from mom6_tools.moc import *
from mom6_tools.m6toolbox import check_time_interval, genBasinMasks
import matplotlib.pyplot as plt
# The following parameters must be set accordingly
######################################################
# case name - must be changed for each configuration
case_name = 'g.c2b6.GNYF.T62_t061.long_run_nuopc.001'
# Path to the run directory
path = "/glade/scratch/gmarques/g.c2b6.GNYF.T62_t061.long_run_nuopc.001/run/"
# initial and final years for computing time mean
year_start = 80
year_end = 90
# add your name and email address below
author = 'Gustavo Marques ([email protected])'
######################################################
# create an empty class object
class args:
pass
args.infile = path
args.static = 'g.c2b6.GNYF.T62_t061.long_run_nuopc.001.mom6.static.nc'
args.monthly = 'g.c2b6.GNYF.T62_t061.long_run_nuopc.001.mom6.hm_*nc'
args.year_start = year_start
args.year_end = year_end
args.case_name = case_name
args.label = ''
args.savefigs = False
stream = True
# mom6 grid
grd = MOM6grid(args.infile+args.static)
depth = grd.depth_ocean
# remove NaNs, otherwise genBasinMasks won't work
depth[numpy.isnan(depth)] = 0.0
basin_code = m6toolbox.genBasinMasks(grd.geolon, grd.geolat, depth)
# load data
ds = xr.open_mfdataset(args.infile+args.monthly,decode_times=False)
# convert time in years
ds['time'] = ds.time/365.
ti = args.year_start
tf = args.year_end
# check if data includes years between ti and tf
check_time_interval(ti,tf,ds)
# create a ndarray subclass
class C(numpy.ndarray): pass
if 'vmo' in ds.variables:
varName = 'vmo'; conversion_factor = 1.e-9
elif 'vh' in ds.variables:
varName = 'vh'; conversion_factor = 1.e-6
if 'zw' in ds.variables: conversion_factor = 1.e-9 # Backwards compatible for when we had wrong units for 'vh'
else: raise Exception('Could not find "vh" or "vmo" in file "%s"'%(args.infile+args.static))
tmp = np.ma.masked_invalid(ds[varName].sel(time=slice(ti,tf)).mean('time').data)
tmp = tmp[:].filled(0.)
VHmod = tmp.view(C)
VHmod.units = ds[varName].units
Zmod = m6toolbox.get_z(ds, depth, varName)
if args.case_name != '': case_name = args.case_name + ' ' + args.label
else: case_name = rootGroup.title + ' ' + args.label
# Global MOC
m6plot.setFigureSize([16,9],576,debug=False)
axis = plt.gca()
cmap = plt.get_cmap('dunnePM')
z = Zmod.min(axis=-1); psiPlot = MOCpsi(VHmod)*conversion_factor
psiPlot = 0.5 * (psiPlot[0:-1,:]+psiPlot[1::,:])
#yy = y[1:,:].max(axis=-1)+0*z
yy = grd.geolat_c[:,:].max(axis=-1)+0*z
print(z.shape, yy.shape, psiPlot.shape)
ci=m6plot.pmCI(0.,40.,5.)
plotPsi(yy, z, psiPlot, ci, 'Global MOC [Sv]')
plt.xlabel(r'Latitude [$\degree$N]')
plt.suptitle(case_name)
#findExtrema(yy, z, psiPlot, max_lat=-30.)
#findExtrema(yy, z, psiPlot, min_lat=25.)
#findExtrema(yy, z, psiPlot, min_depth=2000., mult=-1.)
# Atlantic MOC
m6plot.setFigureSize([16,9],576,debug=False)
cmap = plt.get_cmap('dunnePM')
m = 0*basin_code; m[(basin_code==2) | (basin_code==4) | (basin_code==6) | (basin_code==7) | (basin_code==8)]=1
ci=m6plot.pmCI(0.,22.,2.)
z = (m*Zmod).min(axis=-1); psiPlot = MOCpsi(VHmod, vmsk=m*numpy.roll(m,-1,axis=-2))*conversion_factor
psiPlot = 0.5 * (psiPlot[0:-1,:]+psiPlot[1::,:])
#yy = y[1:,:].max(axis=-1)+0*z
yy = grd.geolat_c[:,:].max(axis=-1)+0*z
plotPsi(yy, z, psiPlot, ci, 'Atlantic MOC [Sv]')
plt.xlabel(r'Latitude [$\degree$N]')
plt.suptitle(case_name)
#findExtrema(yy, z, psiPlot, min_lat=26.5, max_lat=27.) # RAPID
#findExtrema(yy, z, psiPlot, max_lat=-33.)
#findExtrema(yy, z, psiPlot)
#findExtrema(yy, z, psiPlot, min_lat=5.)
| _____no_output_____ | Apache-2.0 | docs/source/examples/meridional_overturning.ipynb | gustavo-marques/mom6-tools |
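For reference, the overturning streamfunction computed by `MOCpsi` is conceptually a zonal sum of the meridional mass transport followed by a cumulative sum in the vertical. A toy numpy illustration of that idea (not the actual mom6_tools implementation) is:

```python
import numpy as np

def moc_psi_sketch(vmo, conversion_factor=1.e-9):
    """Toy overturning streamfunction: vmo has dims (z, y, x) in kg/s; returns psi in Sv."""
    zonal_sum = np.nansum(vmo, axis=-1)                      # integrate transport along x
    psi = np.cumsum(zonal_sum, axis=0) * conversion_factor   # accumulate transport in the vertical
    return psi

# example with random numbers standing in for vmo
psi = moc_psi_sketch(np.random.rand(35, 100, 120) * 1e6)
print(psi.shape)   # (z, y)
```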
Mean = $\frac{1}{n} \sum_{i=1}^n a_i$ | # create an RDD with the numbers 0 to 99
rdd = sc.parallelize(range(100))
sum_ = rdd.sum()
n = rdd.count()
mean = sum_/n
print(mean) | 49.5
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Median (1) sort the list (2) pick the middle element | rdd.collect()
rdd.sortBy(lambda x:x).collect() | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
To access the middle element, we need to access the index. | rdd.sortBy(lambda x:x).zipWithIndex().collect()
sortedandindexed = rdd.sortBy(lambda x: x).zipWithIndex().map(lambda x: (x[1], x[0]))  # key by position so lookup() retrieves by index
n = sortedandindexed.count()
if (n % 2 == 1):
    index = (n - 1) // 2  # integer index of the middle element
    print(sortedandindexed.lookup(index)[0])
else:
    index1 = (n // 2) - 1
    index2 = n // 2
    value1 = sortedandindexed.lookup(index1)[0]
    value2 = sortedandindexed.lookup(index2)[0]
    print((value1 + value2)/2) | 49.5
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Standard Deviation: - tells you how widely the data is spread around the mean, so if SD is low, all the values should be close to the mean - to calculate it, first calculate the mean $\bar{x}$ - SD = $\sqrt{\frac{1}{N}\sum_{i=1}^N(x_i - \bar{x})^2}$ | from math import sqrt
sum_ = rdd.sum()
n = rdd.count()
mean = sum_/n
sqrt(rdd.map(lambda x: pow(x-mean,2)).sum()/n) | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Skewness- tells us how asymmetric data is spread around the mean - check positive skew, negative skew - Skew = $\frac{1}{n}\frac{\sum_{j=1}^n (x_j- \bar{x})^3}{\text{SD}^3}$, x_j= individual value | sd= sqrt(rdd.map(lambda x: pow(x-mean,2)).sum()/n)
n = float(n) # to round off
skw = (1/n)*rdd.map(lambda x : pow(x- mean,3)/pow(sd,3)).sum()
skw | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Kurtosis- tells us the shape of the data- indicates outlier content within the data- kurt = $\frac{1}{n}\frac{\sum_{j=1}^n (x_j- \bar{x})^4}{\text{SD}^4}$, x_j= individual value | (1/n)*rdd.map(lambda x : pow(x- mean,4)/pow(sd,4)).sum()
| _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Covariance & Correlation - how two columns interact with each other - how all columns interact with each other - cov(X,Y) = $\frac{1}{n} \sum_{i=1}^n (x_i-\bar{x})(y_i -\bar{y})$ | rddX = sc.parallelize(range(100))
rddY = sc.parallelize(range(100))
# to avoid loss of precision use float
meanX = rddX.sum()/float(rddX.count())
meanY = rddY.sum()/float(rddY.count())
# since we need to use rddX and rddY at the same time, we need to zip them together
rddXY = rddX.zip(rddY)
covXY = rddXY.map(lambda x:(x[0]-meanX)*(x[1]-meanY)).sum()/rddXY.count()
covXY | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Correlation - corr(X,Y) = $\frac{\text{cov}(X,Y)}{SD_X SD_Y}$. Measure of dependency: +1 columns totally correlate, 0 columns show no interaction, -1 inverse dependency | from math import sqrt
n = rddXY.count()
mean = sum_/n
SDX = sqrt(rddX.map(lambda x: pow(x-meanX,2)).sum()/n)
SDY = sqrt(rddY.map(lambda y: pow(y-meanY,2)).sum()/n)
corrXY = covXY/(SDX *SDY)
corrXY
# correlation matrix in practice
import random
from pyspark.mllib.stat import Statistics
col1 = sc.parallelize(range(100))
col2 = sc.parallelize(range(100,200))
col3 = sc.parallelize(list(reversed(range(100))))
col4 = sc.parallelize(random.sample(range(100),100))
data = col1
data.take(5)
data1 = col1.zip(col2)
data1.take(5) | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
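The cell above imports `Statistics` but stops before actually building the correlation matrix. A possible completion, assuming the four columns defined above, zips them into one record per row and calls `Statistics.corr`:

```python
from pyspark.mllib.stat import Statistics

# build an RDD of rows, one list with the four column values per record
rows = col1.zip(col2).zip(col3).zip(col4).map(
    lambda t: [t[0][0][0], t[0][0][1], t[0][1], t[1]]
)
print(Statistics.corr(rows))   # 4x4 Pearson correlation matrix
```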
Welcome to exercise one of week two of “Apache Spark for Scalable Machine Learning on BigData”. In this exercise you’ll read a DataFrame in order to perform a simple statistical analysis. Then you’ll rebalance the dataset. No worries, we’ll explain everything to you, let’s get started. Let’s create a data frame from a remote file by downloading it: | # delete files from previous runs
!rm -f hmp.parquet*
# download the file containing the data in PARQUET format
!wget https://github.com/IBM/coursera/raw/master/hmp.parquet
# create a dataframe out of it
df = spark.read.parquet('hmp.parquet')
# register a corresponding query table
df.createOrReplaceTempView('df') | --2020-11-06 02:38:52-- https://github.com/IBM/coursera/raw/master/hmp.parquet
Resolving github.com (github.com)... 140.82.114.3
Connecting to github.com (github.com)|140.82.114.3|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://github.com/IBM/skillsnetwork/raw/master/hmp.parquet [following]
--2020-11-06 02:38:52-- https://github.com/IBM/skillsnetwork/raw/master/hmp.parquet
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/skillsnetwork/master/hmp.parquet [following]
--2020-11-06 02:38:53-- https://raw.githubusercontent.com/IBM/skillsnetwork/master/hmp.parquet
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.52.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.52.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 932997 (911K) [application/octet-stream]
Saving to: ‘hmp.parquet’
hmp.parquet 100%[===================>] 911.13K 3.79MB/s in 0.2s
2020-11-06 02:38:53 (3.79 MB/s) - ‘hmp.parquet’ saved [932997/932997]
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
This is a classical classification data set. One thing we always do during data analysis is checking if the classes are balanced. In other words, whether there are more or less the same number of examples in each class. Let’s find out by a simple aggregation using SQL. | from pyspark.sql.functions import col
counts = df.groupBy('class').count().orderBy('count')
display(counts)
df.groupBy('class').count().show()
spark.sql('select class,count(*) from df group by class').show() | +--------------+--------+
| class|count(1)|
+--------------+--------+
| Use_telephone| 15225|
| Standup_chair| 25417|
| Eat_meat| 31236|
| Getup_bed| 45801|
| Drink_glass| 42792|
| Pour_water| 41673|
| Comb_hair| 23504|
| Walk| 92254|
| Climb_stairs| 40258|
| Sitdown_chair| 25036|
| Liedown_bed| 11446|
|Descend_stairs| 15375|
| Brush_teeth| 29829|
| Eat_soup| 6683|
+--------------+--------+
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
This looks nice, but it would be even better if we could aggregate further to obtain some quantitative metrics on the imbalance, like min, max, mean and standard deviation. If we divide max by min we get a measure called the minmax ratio, which tells us something about the relationship between the smallest and largest class. Again, let’s first use SQL for those of you familiar with SQL. Don’t be scared, we’re using nested sub-selects, basically selecting from the result of a SQL query as if it were a table. All within one SQL statement. | spark.sql('''
select
*,
max/min as minmaxratio -- compute minmaxratio based on previously computed values
from (
select
min(ct) as min, -- compute minimum value of all classes
max(ct) as max, -- compute maximum value of all classes
mean(ct) as mean, -- compute mean between all classes
stddev(ct) as stddev -- compute standard deviation between all classes
from (
select
count(*) as ct -- count the number of rows per class and rename it to ct
from df -- access the temporary query table called df backed by DataFrame df
group by class -- aggrecate over class
)
)
''').show() | +----+-----+------------------+------------------+-----------------+
| min| max| mean| stddev| minmaxratio|
+----+-----+------------------+------------------+-----------------+
|6683|92254|31894.928571428572|21284.893716741157|13.80427951518779|
+----+-----+------------------+------------------+-----------------+
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
The same query can be expressed using the DataFrame API. Again, don’t be scared. It’s just a sequential expression of transformation steps. You can now choose which syntax you like better. | df.show()
df.printSchema()
from pyspark.sql.functions import col, min, max, mean, stddev
df \
.groupBy('class') \
.count() \
.select([
min(col("count")).alias('min'),
max(col("count")).alias('max'),
mean(col("count")).alias('mean'),
stddev(col("count")).alias('stddev')
]) \
.select([
col('*'),
(col("max") / col("min")).alias('minmaxratio')
]) \
.show()
| +----+-----+------------------+------------------+-----------------+
| min| max| mean| stddev| minmaxratio|
+----+-----+------------------+------------------+-----------------+
|6683|92254|31894.928571428572|21284.893716741157|13.80427951518779|
+----+-----+------------------+------------------+-----------------+
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Now it’s time for you to work on the data set. First, please create a table of all classes with the respective counts, but this time, please order the table by the count number, ascending. | df1 = df.groupBy('class').count()
df1.sort('count',ascending=True).show() | +--------------+-----+
| class|count|
+--------------+-----+
| Eat_soup| 6683|
| Liedown_bed|11446|
| Use_telephone|15225|
|Descend_stairs|15375|
| Comb_hair|23504|
| Sitdown_chair|25036|
| Standup_chair|25417|
| Brush_teeth|29829|
| Eat_meat|31236|
| Climb_stairs|40258|
| Pour_water|41673|
| Drink_glass|42792|
| Getup_bed|45801|
| Walk|92254|
+--------------+-----+
| Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Pixiedust is a very sophisticated library. It takes care of sorting as well. Please modify the bar chart so that it gets sorted by the number of elements per class, ascending. Hint: It’s an option available in the UI once rendered using the display() function. | import pixiedust
from pyspark.sql.functions import col
counts = df.groupBy('class').count().orderBy('count')
display(counts) | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Imbalanced classes can cause pain in machine learning. Therefore let’s rebalance. In the following we limit the number of elements per class to the amount of the least represented class. This is called undersampling. Other ways of rebalancing can be found here:[https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/](https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0201EN-SkillsNetwork-20647446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) | from pyspark.sql.functions import min
# create a lot of distinct classes from the dataset
classes = [row[0] for row in df.select('class').distinct().collect()]
# compute the number of elements of the smallest class in order to limit the number of samples per class
min = df.groupBy('class').count().select(min('count')).first()[0]
# define the result dataframe variable
df_balanced = None
# iterate over distinct classes
for cls in classes:
# only select examples for the specific class within this iteration
# shuffle the order of the elements (by setting fraction to 1.0 sample works like shuffle)
# return only the first n samples
df_temp = df \
.filter("class = '"+cls+"'") \
.sample(False, 1.0) \
.limit(min)
# on the first iteration, assign df_temp to the empty df_balanced
if df_balanced is None:
df_balanced = df_temp
# afterwards, append vertically
else:
df_balanced=df_balanced.union(df_temp) | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
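As an aside, a similar undersampling can be expressed more compactly with `DataFrame.sampleBy`, which draws a per-class fraction in one pass. The sketch below is an alternative, not the course's reference solution, and because sampling is approximate the class sizes will only be roughly equal.

```python
# per-class sampling fractions so that each class is downsampled to roughly the smallest class size
class_counts = {row['class']: row['count'] for row in df.groupBy('class').count().collect()}
smallest = sorted(class_counts.values())[0]
fractions = {cls: float(smallest) / ct for cls, ct in class_counts.items()}

df_balanced_approx = df.sampleBy('class', fractions, seed=42)
df_balanced_approx.groupBy('class').count().show()
```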
Please verify, by using the code cell below, if df_balanced has the same number of elements per class. You should get 6683 elements per class. | $$$ | _____no_output_____ | Apache-2.0 | (b)intro 2.ipynb | fahimalamabir/scalable_machine_learning_Apache_Spark |
Importing NLTK packages | import nltk
import pandas as pd
restuarant = pd.read_csv("User_restaurants_reviews.csv")
restuarant.head()
from nltk.tokenize import sent_tokenize, word_tokenize
example_text = restuarant["Review"][1]
print(example_text)
nltk.download('stopwords') | [nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Aditya\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
| MIT | NLP Basics.ipynb | SaiAdityaGarlapati/nlp-peronsal-archive |
Importing stopwords and filtering data using list comprehension | from nltk.corpus import stopwords
stop_words = set(stopwords.words('english')) ##Selecting the stop words we want
print(len(stop_words))
print(stop_words)
nltk.download('punkt')
word_tokens = word_tokenize(example_text)
print(word_tokens)
filtered_sentence = [word for word in word_tokens if not word in stop_words]
print(filtered_sentence) | ['I', 'learned', 'electric', 'slicer', 'used', 'blade', 'becomes', 'hot', 'enough', 'start', 'cook', 'prosciutto', '.']
| MIT | NLP Basics.ipynb | SaiAdityaGarlapati/nlp-peronsal-archive |
Stemming the sentence | from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
stem_tokens=[stemmer.stem(word) for word in word_tokens]
print(stem_tokens) | ['I', 'learn', 'that', 'if', 'an', 'electr', 'slicer', 'is', 'use', 'the', 'blade', 'becom', 'hot', 'enough', 'to', 'start', 'to', 'cook', 'the', 'prosciutto', '.']
| MIT | NLP Basics.ipynb | SaiAdityaGarlapati/nlp-peronsal-archive |
Comparing the stemmed sentence using Jaccard similarity | from sklearn.metrics import jaccard_similarity_score
score = jaccard_similarity_score(word_tokens,stem_tokens)
print(score)
nltk.download('averaged_perceptron_tagger')
#Write a function to get all the possible POS tags of NLTK?
text = word_tokenize("And then therefore it was something completely different")
nltk.pos_tag(text)
nltk.download('tagsets')
def all_pos_tags():
print(nltk.help.upenn_tagset())
all_pos_tags()
#Write a function to remove punctuation in NLTK
def remove_punctuation(s):
words = nltk.word_tokenize(s)
words=[word.lower() for word in words if word.isalpha()]
print(words)
str1 = restuarant["Review"][12]
remove_punctuation(str1)
#Write a function to remove stop words in NLTK
def remove_stop_words(s):
word_tokens = word_tokenize(s)
print(word_tokens)
filtered_sentence = [word for word in word_tokens if not word in stop_words]
print(filtered_sentence)
str1 = restuarant["Review"][20]
remove_stop_words(str1)
#Write a function to tokenise a sentence in NLTK
def tokenize_sentence(s):
word_tokens = word_tokenize(s)
print(word_tokens)
str1 = restuarant["Review"][20]
tokenize_sentence(str1)
Write a function to check whether a word is a German word or not (the linked answer shows the same idea for English): https://stackoverflow.com/questions/3788870/how-to-check-if-a-word-is-an-english-word-with-python
Write a function to get the human names from the text below: President Abraham Lincoln suspended the writ of habeas corpus in the Civil War. President Franklin D. Roosevelt claimed emergency powers to fight the Great Depression and World War II. President George W. Bush adopted an expansive concept of White House power after 9/11. President Barack Obama used executive action to shield some undocumented immigrants from deportation.
Write a function to create a word cloud using Python (with or without NLTK) | _____no_output_____ | MIT | NLP Basics.ipynb | SaiAdityaGarlapati/nlp-peronsal-archive |
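A possible sketch for the second exercise (extracting person names); it assumes the standard NLTK resources punkt, averaged_perceptron_tagger, maxent_ne_chunker and words have been downloaded:

import nltk

def extract_person_names(text):
    names = []
    for sent in nltk.sent_tokenize(text):
        tagged = nltk.pos_tag(nltk.word_tokenize(sent))
        tree = nltk.ne_chunk(tagged)
        # keep only the chunks labelled PERSON
        for subtree in tree.subtrees(filter=lambda t: t.label() == 'PERSON'):
            names.append(" ".join(word for word, tag in subtree.leaves()))
    return names

text = ("President Abraham Lincoln suspended the writ of habeas corpus in the Civil War. "
        "President Barack Obama used executive action to shield some undocumented immigrants from deportation.")
print(extract_person_names(text))  # e.g. ['Abraham Lincoln', 'Barack Obama']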
0. Setup Paths | import os
CUSTOM_MODEL_NAME = 'my_ssd_mobnet'
PRETRAINED_MODEL_NAME = 'ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8'
PRETRAINED_MODEL_URL = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz'
TF_RECORD_SCRIPT_NAME = 'generate_tfrecord.py'
LABEL_MAP_NAME = 'label_map.pbtxt'
paths = {
'WORKSPACE_PATH': os.path.join('Tensorflow', 'workspace'),
'SCRIPTS_PATH': os.path.join('Tensorflow','scripts'),
'APIMODEL_PATH': os.path.join('Tensorflow','models'),
'ANNOTATION_PATH': os.path.join('Tensorflow', 'workspace','annotations'),
'IMAGE_PATH': os.path.join('Tensorflow', 'workspace','images'),
'MODEL_PATH': os.path.join('Tensorflow', 'workspace','models'),
'PRETRAINED_MODEL_PATH': os.path.join('Tensorflow', 'workspace','pre-trained-models'),
'CHECKPOINT_PATH': os.path.join('Tensorflow', 'workspace','models',CUSTOM_MODEL_NAME),
'OUTPUT_PATH': os.path.join('Tensorflow', 'workspace','models',CUSTOM_MODEL_NAME, 'export'),
'TFJS_PATH':os.path.join('Tensorflow', 'workspace','models',CUSTOM_MODEL_NAME, 'tfjsexport'),
'TFLITE_PATH':os.path.join('Tensorflow', 'workspace','models',CUSTOM_MODEL_NAME, 'tfliteexport'),
'PROTOC_PATH':os.path.join('Tensorflow','protoc')
}
files = {
'PIPELINE_CONFIG':os.path.join('Tensorflow', 'workspace','models', CUSTOM_MODEL_NAME, 'pipeline.config'),
'TF_RECORD_SCRIPT': os.path.join(paths['SCRIPTS_PATH'], TF_RECORD_SCRIPT_NAME),
'LABELMAP': os.path.join(paths['ANNOTATION_PATH'], LABEL_MAP_NAME)
}
for path in paths.values():
if not os.path.exists(path):
if os.name == 'posix':
!mkdir -p {path}
if os.name == 'nt':
!mkdir {path} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
1. Download TF Models Pretrained Models from Tensorflow Model Zoo and Install TFOD | # https://www.tensorflow.org/install/source_windows
if os.name=='nt':
!pip install wget
import wget
if not os.path.exists(os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection')):
!git clone https://github.com/tensorflow/models {paths['APIMODEL_PATH']}
# Install Tensorflow Object Detection
if os.name=='posix':
!apt-get install protobuf-compiler
!cd Tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install .
if os.name=='nt':
url="https://github.com/protocolbuffers/protobuf/releases/download/v3.15.6/protoc-3.15.6-win64.zip"
wget.download(url)
!move protoc-3.15.6-win64.zip {paths['PROTOC_PATH']}
!cd {paths['PROTOC_PATH']} && tar -xf protoc-3.15.6-win64.zip
os.environ['PATH'] += os.pathsep + os.path.abspath(os.path.join(paths['PROTOC_PATH'], 'bin'))
!cd Tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=. && copy object_detection\\packages\\tf2\\setup.py setup.py && python setup.py build && python setup.py install
!cd Tensorflow/models/research/slim && pip install -e .
VERIFICATION_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'builders', 'model_builder_tf2_test.py')
# Verify Installation
!python {VERIFICATION_SCRIPT}
!pip install pyyaml
!pip install tensorflow_io
!pip install protobuf
!pip install scipy
!pip install pillow
!pip install matplotlib
!pip install pandas
!pip install pycocotools
!pip install tensorflow --upgrade
!pip uninstall protobuf matplotlib -y
!pip install protobuf matplotlib==3.2
import object_detection
if os.name =='posix':
!wget {PRETRAINED_MODEL_URL}
!mv {PRETRAINED_MODEL_NAME+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
!cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME+'.tar.gz'}
if os.name == 'nt':
wget.download(PRETRAINED_MODEL_URL)
!move {PRETRAINED_MODEL_NAME+'.tar.gz'} {paths['PRETRAINED_MODEL_PATH']}
!cd {paths['PRETRAINED_MODEL_PATH']} && tar -zxvf {PRETRAINED_MODEL_NAME+'.tar.gz'} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
2. Create Label Map | labels = [{'name':'stone', 'id':1}, {'name':'cloth', 'id':2}, {'name':'scissors', 'id':3}]
with open(files['LABELMAP'], 'w') as f:
for label in labels:
f.write('item { \n')
f.write('\tname:\'{}\'\n'.format(label['name']))
f.write('\tid:{}\n'.format(label['id']))
f.write('}\n') | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
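For reference, with the three labels defined above the loop writes a label_map.pbtxt that looks like this:

item {
	name:'stone'
	id:1
}
item {
	name:'cloth'
	id:2
}
item {
	name:'scissors'
	id:3
}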
3. Create TF records | # OPTIONAL IF RUNNING ON COLAB
ARCHIVE_FILES = os.path.join(paths['IMAGE_PATH'], 'archive.tar.gz')
if os.path.exists(ARCHIVE_FILES):
!tar -zxvf {ARCHIVE_FILES}
if not os.path.exists(files['TF_RECORD_SCRIPT']):
!git clone https://github.com/nicknochnack/GenerateTFRecord {paths['SCRIPTS_PATH']}
!python {files['TF_RECORD_SCRIPT']} -x {os.path.join(paths['IMAGE_PATH'], 'train')} -l {files['LABELMAP']} -o {os.path.join(paths['ANNOTATION_PATH'], 'train.record')}
!python {files['TF_RECORD_SCRIPT']} -x {os.path.join(paths['IMAGE_PATH'], 'test')} -l {files['LABELMAP']} -o {os.path.join(paths['ANNOTATION_PATH'], 'test.record')} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
4. Copy Model Config to Training Folder | if os.name =='posix':
!cp {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH'])}
if os.name == 'nt':
!copy {os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME, 'pipeline.config')} {os.path.join(paths['CHECKPOINT_PATH'])} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
5. Update Config For Transfer Learning | import tensorflow as tf
from object_detection.utils import config_util
from object_detection.protos import pipeline_pb2
from google.protobuf import text_format
config = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG'])
config
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(files['PIPELINE_CONFIG'], "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline_config)
pipeline_config.model.ssd.num_classes = len(labels)
pipeline_config.train_config.batch_size = 4
pipeline_config.train_config.fine_tune_checkpoint = os.path.join(paths['PRETRAINED_MODEL_PATH'], PRETRAINED_MODEL_NAME, 'checkpoint', 'ckpt-0')
pipeline_config.train_config.fine_tune_checkpoint_type = "detection"
pipeline_config.train_input_reader.label_map_path= files['LABELMAP']
pipeline_config.train_input_reader.tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'train.record')]
pipeline_config.eval_input_reader[0].label_map_path = files['LABELMAP']
pipeline_config.eval_input_reader[0].tf_record_input_reader.input_path[:] = [os.path.join(paths['ANNOTATION_PATH'], 'test.record')]
config_text = text_format.MessageToString(pipeline_config)
with tf.io.gfile.GFile(files['PIPELINE_CONFIG'], "wb") as f:
f.write(config_text) | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
6. Train the model | !pip install lvis
!pip install gin
!pip install gin-config
!pip install tensorflow_addons
TRAINING_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'model_main_tf2.py')
command = "python {} --model_dir={} --pipeline_config_path={} --num_train_steps=2000".format(TRAINING_SCRIPT, paths['CHECKPOINT_PATH'],files['PIPELINE_CONFIG'])
print(command)
#!{command} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
7. Evaluate the Model | command = "python {} --model_dir={} --pipeline_config_path={} --checkpoint_dir={}".format(TRAINING_SCRIPT, paths['CHECKPOINT_PATH'],files['PIPELINE_CONFIG'], paths['CHECKPOINT_PATH'])
print(command)
#!{command} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
8. Load Train Model From Checkpoint | import os
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
from object_detection.utils import config_util
# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(files['PIPELINE_CONFIG'])
detection_model = model_builder.build(model_config=configs['model'], is_training=False)
# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(paths['CHECKPOINT_PATH'], 'ckpt-3')).expect_partial()
@tf.function
def detect_fn(image):
image, shapes = detection_model.preprocess(image)
prediction_dict = detection_model.predict(image, shapes)
detections = detection_model.postprocess(prediction_dict, shapes)
return detections | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
9. Detect from an Image | import cv2
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
category_index = label_map_util.create_category_index_from_labelmap(files['LABELMAP'])
IMAGE_PATH = os.path.join(paths['IMAGE_PATH'], 'test', 'scissors.ce01a4a7-a850-11ec-85bd-005056c00008.jpg')
img = cv2.imread(IMAGE_PATH)
image_np = np.array(img)
input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
detections = detect_fn(input_tensor)
num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
for key, value in detections.items()}
detections['num_detections'] = num_detections
# detection_classes should be ints.
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
label_id_offset = 1
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'],
detections['detection_classes']+label_id_offset,
detections['detection_scores'],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=5,
min_score_thresh=.8,
agnostic_mode=False)
plt.imshow(cv2.cvtColor(image_np_with_detections, cv2.COLOR_BGR2RGB))
plt.show() | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
10. Real Time Detections from your Webcam | !pip uninstall opencv-python-headless -y
cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
while cap.isOpened():
ret, frame = cap.read()
frame = cv2.flip(frame,1,dst=None)  # horizontal flip (mirror the webcam image)
image_np = np.array(frame)
input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
detections = detect_fn(input_tensor)
num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
for key, value in detections.items()}
detections['num_detections'] = num_detections
# detection_classes should be ints.
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
label_id_offset = 1
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'],
detections['detection_classes']+label_id_offset,
detections['detection_scores'],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=5,
min_score_thresh=.8,
agnostic_mode=False)
cv2.imshow('object detection', cv2.resize(image_np_with_detections, (800, 600)))
if cv2.waitKey(10) & 0xFF == ord('q'):
cap.release()
cv2.destroyAllWindows()
break | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
10. Freezing the Graph | FREEZE_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'exporter_main_v2.py ')
FREEZE_SCRIPT
command = "python {} --input_type=image_tensor --pipeline_config_path={} --trained_checkpoint_dir={} --output_directory={}".format(FREEZE_SCRIPT ,files['PIPELINE_CONFIG'], paths['CHECKPOINT_PATH'], paths['OUTPUT_PATH'])
print(command)
!{command} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
11. Conversion to TFJS | !pip install tensorflowjs
command = "tensorflowjs_converter --input_format=tf_saved_model --output_node_names='detection_boxes,detection_classes,detection_features,detection_multiclass_scores,detection_scores,num_detections,raw_detection_boxes,raw_detection_scores' --output_format=tfjs_graph_model --signature_name=serving_default {} {}".format(os.path.join(paths['OUTPUT_PATH'], 'saved_model'), paths['TFJS_PATH'])
print(command)
!{command}
# Test Code: https://github.com/nicknochnack/RealTimeSignLanguageDetectionwithTFJS | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
12. Conversion to TFLite | TFLITE_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'export_tflite_graph_tf2.py ')
command = "python {} --pipeline_config_path={} --trained_checkpoint_dir={} --output_directory={}".format(TFLITE_SCRIPT ,files['PIPELINE_CONFIG'], paths['CHECKPOINT_PATH'], paths['TFLITE_PATH'])
print(command)
!{command}
FROZEN_TFLITE_PATH = os.path.join(paths['TFLITE_PATH'], 'saved_model')
TFLITE_MODEL = os.path.join(paths['TFLITE_PATH'], 'saved_model', 'detect.tflite')
command = "tflite_convert \
--saved_model_dir={} \
--output_file={} \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=FLOAT \
--allow_custom_ops".format(FROZEN_TFLITE_PATH, TFLITE_MODEL, )
print(command)
!{command} | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
13. Zip and Export Models | !tar -czf models.tar.gz {paths['CHECKPOINT_PATH']}
from google.colab import drive
drive.mount('/content/drive') | _____no_output_____ | MIT | 2. Training and Detection.ipynb | luchaoshi45/tensorflow_jupyter_cnn |
Learn Python 3 in 15 Days. Copyright by 黑板客; to repost please contact heibanke_at_aliyun.com. **Last lesson's homework**: Tower of Hanoi, how do we store and manipulate the data? | %load day07/hnt.py
| _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
day08: generators, a crash course. 1. Generators 2. itertools 4. Homework: the Eight Queens. Generator functions: 1) the return keyword is replaced by yield; 2) calling the "function" immediately returns an iterator without running the body; execution starts only when next() is called on that iterator and pauses at the first yield; 3) each further call to next() resumes execution until the next yield; 4) this repeats until StopIteration is raised. | # simplest example: produce the integers 0..N-1
def irange(N):
a = 0
while a<N:
yield a
a = a+1
b = irange(10)
print(b)
next(b) | _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
When the data you generate will only ever be iterated over, a generator is the right fit. Note, however, that a generator can be traversed only once. | # Fibonacci sequence
from __future__ import print_function
def fib():
a, b = 0, 1
while True:
yield b
a, b = b, a + b
for i in fib():
if i > 1000:
break
else:
print(i) | _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
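A quick illustration of the "only once" caveat mentioned above:

g = (x * x for x in range(3))
print(list(g))  # [0, 1, 4]
print(list(g))  # []  (the generator is already exhausted)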
Generator expressions | a = (x**2 for x in range(10))
next(a)
%%timeit -n 1 -r 1
sum([x**2 for x in range(10000000)])
%%timeit -n 1 -r 1
sum(x**2 for x in range(10000000)) | _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
send: the iteration of a generator can be altered from outside by sending a chosen value into it | def counter(maximum):
i = 0
while i < maximum:
val = (yield i)
print("i=%s, val=%s"%(i, val))
# If value provided, change counter
if val is not None:
i = val
else:
i += 1
it = counter(10)
print("yield value: %s"%(next(it)))
print("yield value: %s"%(next(it)))
print(it.send(5))
# think about what the next print(next(it)) will output | _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
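Answer to the question in the comment above: it.send(5) set the counter to i=5, so the next print(next(it)) first prints "i=5, val=None" inside the generator and then yields 6, which is what gets printed.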
itertools: 1. chain strings several iterators together; 2. repeat repeats an element; 3. permutations gives ordered selections of m items out of N; 4. combinations gives unordered selections of m items out of N; 5. product picks one item from each of several collections (the Cartesian product). | import itertools
horses=[1,2,3,4]
races = itertools.permutations(horses,3)
a=itertools.product([1,2],[3,4],[5,6])
b=itertools.repeat([1,2,3],4)
c=itertools.combinations([1,2,3,4],3)
d=itertools.chain(races, a, b, c)
print([i for i in races])
print("====================")
print([i for i in a])
print("====================")
print([i for i in b])
print("====================")
print([i for i in c])
print("====================")
print([i for i in d])
| _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
**Homework: the Eight Queens problem.** Place 8 queens on an 8x8 board so that no queen can capture another, and find all such arrangements. 1. Every row, every column, every diagonal and every anti-diagonal may hold at most one queen. 2. Use a generator. 3. Support N queens. | from day08.eight_queen import gen_n_queen, printsolution
solves = gen_n_queen(5)
s = next(solves)
print(s)
printsolution(s)
def printsolution(solve):
n = len(solve)
sep = "+" + "-+" * n
print(sep)
for i in range(n):
squares = [" " for j in range(n)]
squares[solve[i]] = "Q"
print("|" + "|".join(squares) + "|")
print(sep) | _____no_output_____ | MIT | Code/day08.ipynb | heibanke/learn_python_in_15days |
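day08/eight_queen.py is not shown in this notebook; a minimal generator-based sketch of gen_n_queen (a brute-force permutation search, not necessarily the author's implementation) could look like this:

from itertools import permutations

def gen_n_queen(n):
    # a solution is a tuple: index = row, value = column of the queen in that row
    for cols in permutations(range(n)):
        # all diagonals and all anti-diagonals distinct -> no two queens attack each other
        if n == len(set(cols[i] + i for i in range(n))) == len(set(cols[i] - i for i in range(n))):
            yield cols

Each yielded tuple can be passed directly to the printsolution helper above.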
Compute $ \sqrt[k]{a} $ | import numpy as np
def printable_test(a, k, f, prc=1e-4):
ans = f(a, k)
print(f'Our result: {a}^(1/{k}) ~ {ans:.10f}')
print(f'True result: {a**(1/k):.10f}\n')
print(f'Approx a ~ {ans**k:.10f}')
print(f'True a = {a}')
assert abs(a - ans**k) < prc, f'the answer differs by {abs(a - ans**k):.10f} from the true one'
def not_printable_test(a, k, f, prc=1e-4):
ans = f(a, k)
assert abs(a - ans**k) < prc, f'f({a}, {k}): the answer differs by {abs(a - ans**k):.10f} from the true one'
def test(func):
rng = np.random.default_rng(12345)
test_len = 1000
vals = rng.integers(low=0, high=1000, size=test_len)
pws = rng.integers(low=1, high=100, size=test_len)
for a, k in zip(vals, pws):
not_printable_test(a, k, func)
print(f'All {test_len} tests have passed!')
def root_bisection(a: float, k: float, iters=1000) -> float:
def f(x):
return x**k - a
assert k > 0, 'Negative `k` values are not allowed'
l, r = 0, a
for _ in range(iters):
m = l + (r - l) / 2
if f(m) * f(l) <= 0:
r = m
else:
l = m
return l + (r - l) / 2
test(root_bisection)
printable_test(1350, 12, root_bisection)
print('\n')
printable_test(-1, 1, root_bisection)
def root_newton(a: float, k: float, iters=1000) -> float:
def f(x):
return x**k - a
def dx(x):
return k * x**(k - 1)
assert k > 0, 'Negative `k` values are not allowed'
x = 1
for _ in range(iters):
x = x - f(x) / dx(x)
return x
test(root_newton)
printable_test(1350, 12, root_newton)
print('\n')
printable_test(-1, 1, root_newton) | Our result: 1350^(1/12) ~ 1.8233126596
True result: 1.8233126596
Approx a ~ 1350.0000000000
True a = 1350
Our result: -1^(1/1) ~ -1.0000000000
True result: -1.0000000000
Approx a ~ -1.0000000000
True a = -1
| MIT | savinov-vlad/hw1.ipynb | dingearteom/co-mkn-hw-2021 |
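For reference, the Newton step used in root_newton above, with $f(x) = x^k - a$, is
$$x_{n+1} = x_n - \frac{x_n^k - a}{k\,x_n^{k-1}} = \frac{(k-1)\,x_n + a\,x_n^{1-k}}{k}.$$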
Given a polynomial P of degree at most 5 and a segment [L, R]: localize the roots, i.e. find subsegments with $ P(L_i) \cdot P(R_i) < 0 $, and then find the root on each such subsegment | from typing import List
import numpy as np
class Polynom:
def __init__(self, coefs: List[float]):
# self.coefs = [a0, a1, a2, ...., an]
self.coefs = coefs
def __str__(self):
if not self.coefs:
return ''
descr = str(self.coefs[0])
for i, coef in enumerate(self.coefs[1:]):
sign = '+' if coef > 0 else '-'
descr += f' {sign} {abs(coef)} * x^{i + 1}'
return descr
def __repr__(self):
return self.__str__()
def value_at(self, x: float) -> float:
res = 0
for i, coef in enumerate(self.coefs):
res += x**i * coef
return res
def dx(self):
if not self.coefs or len(self.coefs) == 1:
return Polynom([0])
return Polynom([(i + 1) * coef for i, coef in enumerate(self.coefs[1:])])
def is_root(self, x: float, prc=1e-4) -> bool:
return abs(0 - self.value_at(x)) < prc
def root_segments(self, l: float, r: float, min_seg_len=1e-2) -> List[List[float]]:
segs = []
prev_end, cur_end = l, l + min_seg_len
while cur_end < r:
if self.value_at(prev_end) * self.value_at(cur_end) < 0:
segs.append([prev_end, cur_end])
prev_end = cur_end
if self.value_at(cur_end) == 0:
move = min_seg_len / 10
segs.append([prev_end, cur_end + move])
prev_end = cur_end + move
cur_end += min_seg_len
return segs
def find_single_root(self, l: float, r: float, iters=1000) -> float:
for _ in range(iters):
m = l + (r - l) / 2
if self.value_at(l) * self.value_at(m) < 0:
r = m
else:
l = m
return l + (r - l) / 2
def find_roots(self, l: float, r: float) -> List[float]:
roots = []
segs = self.root_segments(l, r)
for seg_l, seg_r in segs:
roots.append(self.find_single_root(seg_l, seg_r))
return roots
def check_roots(self, roots: List[float]) -> bool:
return np.all([self.is_root(x) for x in roots])
def find_min(self, l: float, r: float) -> float:
assert self.coefs, 'Polynom must contain at least one coef'
if len(self.coefs) == 1:
return self.coefs[0]
pts = [l, *self.dx().find_roots(l, r), r]
return min(self.value_at(pt) for pt in pts)
# x^3 + 97.93 x^2 - 229.209 x + 132.304
# (x - 1.1) * (x - 1.2) * (x + 100.23)
p = Polynom([132.304, -229.209, 97.93, 1])
p.find_roots(-1000, 1000)
p.find_min(-10, 10)
Polynom([1, 2, 1]).find_roots(-1000, 1000)
Polynom([1, -2]).find_roots(-1000, 1000) | _____no_output_____ | MIT | savinov-vlad/hw1.ipynb | dingearteom/co-mkn-hw-2021 |
Find the minimum of the function $ e^{ax} + e^{-bx} + c(x - d)^2$ | from numpy import exp
from typing import Tuple
class ExpMinFinder:
def __init__(self, a: float, b: float, c: float, d: float):
if a <= 0 or b <= 0 or c <= 0:
raise ValueError("Parameters must be non-negative")
self.a = a
self.b = b
self.c = c
self.d = d
def f(self, x) -> float:
return exp(self.a * x) + exp(-self.b * x) + self.c * (x - self.d)**2
def dx(self, x) -> float:
return self.a * exp(self.a * x) - self.b * exp(-self.b * x) + 2 * self.c * (x - self.d)
def ddx(self, x) -> float:
return self.a**2 * exp(self.a * x) + self.b**2 * exp(-self.b * x) + 2 * self.c
def min_bisection(self, iters=1000) -> Tuple[float, float]:
l, r = -100, 100
for _ in range(iters):
m = l + (r - l) / 2
if self.dx(m) * self.dx(l) < 0:
r = m
else:
l = m
min_at = l + (r - l) / 2
return min_at, self.f(min_at)
def min_newton(self, iters=1000) -> Tuple[float, float]:
x = 1
for _ in range(iters):
x = x - self.dx(x) / self.ddx(x)
return x, self.f(x)
def min_ternary(self, iters=1000) -> Tuple[float, float]:
l, r = -100, 100
for _ in range(iters):
m1 = l + (r - l) / 3
m2 = r - (r - l) / 3
if self.f(m1) >= self.f(m2):
l = m1
else:
r = m2
min_at = l + (r - l) / 2
return min_at, self.f(min_at)
def test_exp():
rng = np.random.default_rng(12345)
test_len = 100
a_ = rng.integers(low=1, high=10, size=test_len)
b_ = rng.integers(low=1, high=10, size=test_len)
c_ = rng.integers(low=1, high=10, size=test_len)
d_ = rng.integers(low=1, high=10, size=test_len)
for a, b, c, d in zip(a_, b_, c_, d_):
m = ExpMinFinder(a, b, c, d)
assert abs(m.min_bisection()[1] - m.min_newton()[1]) < 1e-3, \
f'Results: {m.min_bisection():.3f} {m.min_newton():.3f}, values: {a, b, c, d}'
assert abs(m.min_newton()[1] - m.min_ternary()[1]) < 1e-3, \
f'Results: {m.min_newton():.3f} {m.min_ternary():.3f}, values: {a, b, c, d}'
print(f'All {test_len} tests have passed')
test_exp()
m = ExpMinFinder(1, 2, 3, 4)
m.min_bisection()
m.min_newton()
m.min_ternary() | _____no_output_____ | MIT | savinov-vlad/hw1.ipynb | dingearteom/co-mkn-hw-2021 |
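All three methods agree here because the objective is convex for $a, b, c > 0$ (each term is convex), so ternary search applies and the stationary point located by bisection or Newton on $f'(x) = a e^{ax} - b e^{-bx} + 2c(x - d)$ is the unique global minimum.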
This notebook cleans the train set so that, once the pipeline finishes, the set is ready for model training. A short comment above each change explains why it is made. For a more detailed description see *PreprocesadoTrainRaw.ipynb*, where the steps are discussed in more depth. | import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
df = pd.read_table('Modelar_UH2019.txt', sep = '|', dtype={'HY_cod_postal':str})
# Tenemos varios Nans en HY_provincias, por lo que creamos la siguiente función que nos ayudará a imputarlos con
# ayuda del código postal
def ArreglarProvincias(df):
# Diccionario de los códigos postales. 'xxddd' --> xx es el código asociado a la provincia
diccionario_postal = {'02':'Albacete','03':'Alicante','04':'Almería','01':'Álava','33':'Asturias',
'05':'Avila','06':'Badajoz','07':'Baleares', '08':'Barcelona','48':'Bizkaia',
'09':'Burgos','10':'Cáceres','11':'Cádiz','39':'Cantabria','12':'Castellón',
'13':'Ciudad Real','14':'Córdoba','15':'A Coruña','16':'Cuenca','20':'Gipuzkoa',
'17':'Gerona','18':'Granada','19':'Guadalajara','21':'Huelva','22':'Huesca',
'23':'Jaén','24':'León','25':'Lérida','27':'Lugo','28':'Madrid','29':'Málaga',
'30':'Murcia','31':'Navarra','32':'Ourense','34':'Palencia','35':'Las Palmas',
'36':'Pontevedra','26':'La Rioja','37':'Salamanca','38':'Tenerife','40':'Segovia',
'41':'Sevilla','42':'Soria','43':'Tarragona','44':'Teruel','45':'Toledo','46':'Valencia',
'47':'Valladolid','49':'Zamora','50':'Zaragoza','51':'Ceuta','52':'Melilla'}
# Obtenemos los códigos postales que nos faltan
codigos_postales = df.loc[df.HY_provincia.isnull()].HY_cod_postal
# Recorremos la pareja index, value
for idx, cod in zip(codigos_postales.index, codigos_postales):
# Del cod solo nos interesan los dos primeros valores para la provincia.
df.loc[idx,'HY_provincia'] = diccionario_postal[cod[:2]]
# Devolvemos el df de las provincias
return df
# Obtenemos nuestro df con las provincias imputadas
df = ArreglarProvincias(df)
########## Metros ##############
# Volvemos Nans los valores de 0m^2 o inferior --> Los 0 provocan errores en una nueva variable de €/m2
df.loc[df['HY_metros_utiles'] <= 0,'HY_metros_utiles'] = np.nan
df.loc[df['HY_metros_totales'] <= 0,'HY_metros_totales'] = np.nan
# Obtenemos las posiciones de los valores faltantes een los metros útiles
posiciones_nans = df['HY_metros_totales'].isnull()
# Rellenamos los Nans con los metros totales
df.loc[posiciones_nans,'HY_metros_totales'] = df.loc[posiciones_nans,'HY_metros_utiles']
# Obtenemos las posiciones de los valores faltantes een los metros útiles
posiciones_nans = df['HY_metros_utiles'].isnull()
# Rellenamos los Nans con los metros totales
df.loc[posiciones_nans,'HY_metros_utiles'] = df.loc[posiciones_nans,'HY_metros_totales']
# Si continuamos teniendo Nans
if df[['HY_metros_utiles', 'HY_metros_totales']].isnull().sum().sum()>0: # Hay 2 .sum para sumarlo todo
# Agrupamos por HY_tipo
group_tipo = df[['HY_tipo', 'HY_metros_utiles', 'HY_metros_totales']].dropna().groupby('HY_tipo').mean()
# Cuales son los indices de los registros que tienen nans
index_nans = df.index[df['HY_metros_utiles'].isnull()]
for i in index_nans:
tipo = df.loc[i, 'HY_tipo']
df.loc[i, ['HY_metros_utiles', 'HY_metros_totales']] = group_tipo.loc[tipo]
# Eliminamos los outliers
# Definimos la cota a partir de la cual son outliers
cota = df['HY_metros_utiles'].mean()+3*df['HY_metros_utiles'].std()
# Y nos quedamos con todos aquellos que no la superan
df = df[df['HY_metros_utiles'] <= cota]
# Idem para metros totales
# Definimos la cota a partir de la cual son outliers
cota = df['HY_metros_totales'].mean()+3*df['HY_metros_totales'].std()
# Y nos quedamos con todos aquellos que no la superan
df = df[df['HY_metros_totales'] <= cota]
# Por último, eliminamos los registros que presenten una diferencia excesiva de metros
dif_metros = np.abs(df.HY_metros_utiles - df.HY_metros_totales)
df = df[dif_metros <= 500]
########## Precios ############
# Creamos una nueva variable que sea ¿Existe precio anterior?--> Si/No
df['PV_precio_anterior'] = df['HY_precio_anterior'].isnull()
# Y modificamos precio anterior para que tenga los valores del precio actual como anterior
df.loc[df['HY_precio_anterior'].isnull(),'HY_precio_anterior'] = df.loc[df['HY_precio_anterior'].isnull(),'HY_precio']
# Eliminamos también los precios irrisorios (Todos aquellos precios inferiores a 100€)
v = df[['HY_precio', 'HY_precio_anterior']].apply(lambda x: x[0] <= 100 and x[1] <= 100, axis = 1)
df = df[v == False]
######## Descripción y distribución #########
# Creamos 2 nuevas variables con la longitud del texto expuesto (Nan = 0)
# Igualamos los NaN a carácteres vacíos
df.loc[df['HY_descripcion'].isnull(),'HY_descripcion'] = ''
df.loc[df['HY_distribucion'].isnull(),'HY_distribucion'] = ''
# Calculamos su longitud
df['PV_longitud_descripcion'] = df['HY_descripcion'].apply(lambda x: len(x))
df['PV_longitud_distribucion'] = df['HY_distribucion'].apply(lambda x: len(x))
####### Cantidad de imágenes #########
# Añadimos una nueva columna que es la cantidad de imágenes que tiene asociado el piso
# El df de información de las imágenes tiene 3 columnas: id, posicion_foto, carácteres_aleatorios
df_imagenes = pd.read_csv('df_info_imagenes.csv', sep = '|',encoding = 'utf-8')
# Realizamos un count de los ids de las imagenes (Y nos quedamos con el valor de la
# variable Posiciones (Al ser un count, nos es indiferente la variable seleccionada))
df_count_imagenes = df_imagenes.groupby('HY_id').count()['Posiciones']
# Definimos la función que asocia a cada id su número de imágenes
def AñadirCantidadImagenes(x):
try:
return df_count_imagenes.loc[x]
except:
return 0
# Creamos la variable
df['PV_cantidad_imagenes'] = df['HY_id'].apply(lambda x: AñadirCantidadImagenes(x))
######### Imputación de las variables IDEA #########
# En el notebook ImputacionNans.ipynb se explica en mayor profundidad las funciones definidas. Por el momento,
# para imputar los valores Nans de las variables IDEA realizamos lo siguiente:
# -1. Hacemos la media de las variables que no son Nan por CP
# -2. Imputamos por la media del CP
# -3. Repetimos para aquellos codigos postales que son todo Nans con la media por provincias (Sin contar los imputados)
# -4. Imputamos los Nans que faltan por la media general de todo (Sin contar los imputados)
var_list = [
['IDEA_pc_1960', 'IDEA_pc_1960_69', 'IDEA_pc_1970_79', 'IDEA_pc_1980_89','IDEA_pc_1990_99', 'IDEA_pc_2000_10'],
['IDEA_pc_comercio','IDEA_pc_industria', 'IDEA_pc_oficina', 'IDEA_pc_otros','IDEA_pc_residencial', 'IDEA_pc_trast_parking'],
['IDEA_ind_tienda', 'IDEA_ind_turismo', 'IDEA_ind_alimentacion'],
['IDEA_ind_riqueza'],
['IDEA_rent_alquiler'],
['IDEA_ind_elasticidad', 'IDEA_ind_liquidez'],
['IDEA_unitprice_sale_residential', 'IDEA_price_sale_residential', 'IDEA_stock_sale_residential'],
['IDEA_demand_sale_residential'],
['IDEA_unitprice_rent_residential', 'IDEA_price_rent_residential', 'IDEA_stock_rent_residential'],
['IDEA_demand_rent_residential']
]
# Función que imputa Nans por la media de CP o Provincias (La versión de ImputacionNans.ipynb imprime el número
# de valores faltantes después de la imputación)
def ImputarNans_cp(df, vars_imput, var):
'''
df --> Nuestro dataframe a modificar
vars_imput --> Variables que queremos imputar.
var --> Variable por la que queremos realizar la agrupación (HY_cod_postal ó HY_provincia)
'''
# Obtenemos nuestros df de grupos
group_cp = df[[var]+vars_imput].dropna().groupby(var).mean()
# Obtenemos los CP que son Nans
codigos_nans = df.loc[df[vars_imput[0]].isnull(), var] # Valdría cualquiera de las 6 variables.
# Como sabemos que códigos podremos completar y cuales no, solo utilizaremos los que se pueden completar
cods = np.intersect1d(codigos_nans.unique(),group_cp.index)
# Cuales son los índices de los Nans
index_nan = df.index[df[vars_imput[0]].isnull()]
for cod in cods:
# Explicación del indexado: De todos los códigos que coinciden con el nuestro nos quedamos con los que tienen índice
# nan, y para poder acceder a df, necesitamos los índices de Nan que cumplen lo del código.
i = index_nan[(df[var] == cod)[index_nan]]
df.loc[i, vars_imput] = group_cp.loc[cod].values
# Devolvemos los dataframes
return df, group_cp
# Bucle que va variable por variable imputando los valores
for vars_group in var_list:
#print('*'*50)
#print('Variables:', vars_group)
#print('-'*10+' CP '+'-'*10)
df, group_cp = ImputarNans_cp(df, vars_group, var = 'HY_cod_postal')
#print('-'*10+' Provincias '+'-'*10)
df, group_provincia = ImputarNans_cp(df, vars_group, var = 'HY_provincia')
# Si aún quedan Nans los ponemos a todos con la media de todo
registros_faltantes = df[vars_group[0]].isnull().sum()
if registros_faltantes>0:
#print('-'*30)
df.loc[df[vars_group[0]].isnull(), vars_group] = group_provincia.mean(axis = 0).values
#print('Se han imputado {} registros por la media de todo'.format(registros_faltantes))
# Guardamos los datos en la carpeta DF_grupos ya que tenemos que imputar en test por estos mismos valores.
df.to_csv('./DF_grupos/df_filled_{}.csv'.format(vars_group[0]), sep = '|', encoding='utf-8', index = False)
group_cp.to_csv('./DF_grupos/group_cp_{}.csv'.format(vars_group[0]), sep = '|', encoding='utf-8')
group_provincia.to_csv('./DF_grupos/group_prov_{}.csv'.format(vars_group[0]), sep = '|', encoding='utf-8')
####### Indice elasticidad ##########
# Creamos una nueva variable que redondea el indice de elasticidad al entero más cercano (La variable toma 1,2,3,4,5)
df['PV_ind_elasticidad'] = np.round(df['IDEA_ind_elasticidad'])
###### Antigüedad zona #########
# Definimos la variable de antigüedad de la zona dependiendo del porcentaje de pisos construidos en la zona
# Primero tomaremos las variables [IDEA_pc_1960,IDEA_pc_1960_69,IDEA_pc_1970_79,IDEA_pc_1980_89,
# IDEA_pc_1990_99,IDEA_pc_2000_10] y las transformaremos en solo 3. Y luego nos quedaremos
# con el máximo de esas tres para determinar el estado de la zona.
df['Viejos'] = df[['IDEA_pc_1960', 'IDEA_pc_1960_69']].sum(axis = 1)
df['Medios'] = df[['IDEA_pc_1970_79', 'IDEA_pc_1980_89']].sum(axis = 1)
df['Nuevos'] = df[['IDEA_pc_1990_99', 'IDEA_pc_2000_10']].sum(axis = 1)
df['PV_clase_piso'] = df[['Viejos','Medios','Nuevos']].idxmax(axis = 1)
# Añadimos una nueva variable que es si la longitud de la descripción es nula, va de 0 a 1000 carácteres, ó supera los 1000
df['PV_longitud_descripcion2'] = pd.cut(df['PV_longitud_descripcion'], bins = [-1,0,1000, np.inf], labels=['Ninguna', 'Media', 'Larga'], include_lowest=False)
# Precio de euro el metro
df['PV_precio_metro'] = df.HY_precio/df.HY_metros_totales
# Cambiamos Provincias por 'Castellón','Murcia','Almería','Valencia','Otros'
def estructurar_provincias(x):
'''
Funcion que asocia a x (Nombre de provincia) su clase
'''
# Lista de clases que nos queremos quedar
if x in ['Castellón','Murcia','Almería','Valencia']:
return x
else:
return 'Otros'
df['PV_provincia'] = df.HY_provincia.apply(lambda x: estructurar_provincias(x))
# Una nueva que es si el inmueble presenta alguna distribución
df.loc[df['PV_longitud_distribucion'] > 0,'PV_longitud_distribucion'] = 1
# Cambiamos certificado energetico a Si/No (1/0)
df['PV_cert_energ'] = df['HY_cert_energ'].apply(lambda x: np.sum(x != 'No'))
# Cambiamos las categorías de HY_tipo a solo 3: [Piso, Garaje, Otros]
def CategorizarHY_tipo(dato):
if dato in ['Piso', 'Garaje']:
return dato
else:
return 'Otros'
df['PV_tipo'] = df['HY_tipo'].apply(CategorizarHY_tipo)
# Cambiamos la variable Garaje a Tiene/No tiene (1/0)
df.loc[df['HY_num_garajes']>1,'HY_num_garajes'] = 1
# Cambiamos baños por 0, 1, +1 (No tiene, tiene 1, tiene mas de 1)
df['PV_num_banos'] = pd.cut(df['HY_num_banos'], [-1,0,1,np.inf], labels = [0,1,'+1'])
# Cambiamos Num terrazas a Si/No (1/0)
df.loc[df['HY_num_terrazas']>1, 'HY_num_terrazas'] = 1
# Definimos las variables a eliminar para definir nuestro conjunto X
drop_vars = ['HY_id', 'HY_cod_postal', 'HY_provincia', 'HY_descripcion',
'HY_distribucion', 'HY_tipo', 'HY_antiguedad','HY_num_banos', 'HY_cert_energ',
'HY_num_garajes', 'IDEA_pc_1960', 'IDEA_area', 'IDEA_poblacion', 'IDEA_densidad', 'IDEA_ind_elasticidad',
'Viejos', 'Medios','Nuevos']
# Explicación:
# + 'HY_id', 'HY_cod_postal' --> Demasiadas categorías
# + 'HY_provincia' --> Ya tenemos PV_provincia que las agrupa
# + 'HY_descripcion','HY_distribucion' --> Tenemos sus longitudes
# + 'HY_tipo' --> Ya hemos creado PV_tipo
# + 'HY_cert_energ','HY_num_garajes'--> Ya tenemos las PV asociadas (valores con 0 1)
# + 'IDEA_pc_1960' --> Está duplicada
# + 'IDEA_area', 'IDEA_poblacion', 'IDEA_densidad' --> Demasiados Nans
# + 'IDEA_ind_elasticidad' --> Tenemos la variable equivalente en PV
# + 'Viejos', 'Medios','Nuevos' --> Ya tenemos PV_clase_piso
# + 'TARGET' --> Por motivos obvios no la queremos en X
X = df.copy().drop(drop_vars+['TARGET'],axis = 1)
y = df.TARGET.copy()
# Eliminamos los outliers de las siguientes variables
cont_vars = ['HY_metros_utiles', 'HY_metros_totales','GA_page_views', 'GA_mean_bounce',
'GA_exit_rate', 'GA_quincena_ini', 'GA_quincena_ult','PV_longitud_descripcion',
'PV_longitud_distribucion', 'PV_cantidad_imagenes',
'PV_ind_elasticidad', 'PV_precio_metro']
for var in cont_vars:
cota = X[var].mean()+3*X[var].std()
y = y[X[var]<=cota]
X = X[X[var]<=cota]
# Y eliminamos los Outliers de nuestra variable respuesta
X = X[y <= y.mean()+3*y.std()]
y = y[y <= y.mean()+3*y.std()]
# Realizamos el logaritmo de nuestra variable respuesta (Nota: Sumamos 1 para evitar log(0))
y = np.log(y+1)
# Creamos las variables Dummy para las categóricas
dummy_vars = ['PV_provincia','PV_longitud_descripcion2',
'PV_clase_piso','PV_tipo','PV_num_banos']
# Unimos nuestro conjunto con el de dummies
X = X.join(pd.get_dummies(X[dummy_vars]))
# Eliminamos las variables que ya son Dummies
X = X.drop(dummy_vars, axis=1)
############# PCA ####################
# Realizamos una PCA con las variables IDEA (Nota: soolo tomamos 1 componente porque nos explica el 99.95% de la varianza)
idea_vars_price = [
'IDEA_unitprice_sale_residential', 'IDEA_price_sale_residential',
'IDEA_stock_sale_residential', 'IDEA_demand_sale_residential',
'IDEA_unitprice_rent_residential', 'IDEA_price_rent_residential',
'IDEA_stock_rent_residential', 'IDEA_demand_rent_residential']
pca_prices = PCA(n_components=1)
idea_pca_price = pca_prices.fit_transform(X[idea_vars_price])
X['PV_idea_pca_price'] = (idea_pca_price-idea_pca_price.min())/(idea_pca_price.max()-idea_pca_price.min())
# Realizamos una PCA con las variables IDEA (Nota: soolo tomamos 1 componente porque nos explica el 78% de la varianza)
idea_vars_pc = [
'IDEA_pc_comercio',
'IDEA_pc_industria', 'IDEA_pc_oficina', 'IDEA_pc_otros',
'IDEA_pc_residencial', 'IDEA_pc_trast_parking', 'IDEA_ind_tienda',
'IDEA_ind_turismo', 'IDEA_ind_alimentacion', 'IDEA_ind_riqueza',
'IDEA_rent_alquiler', 'IDEA_ind_liquidez']
pca_pc = PCA(n_components=1)
idea_pca_pc = pca_pc.fit_transform(X[idea_vars_pc])
X['PV_idea_pca_pc'] = (idea_pca_pc-idea_pca_pc.min())/(idea_pca_pc.max()-idea_pca_pc.min())
# Nos quedamos con la información PCA de nuestras PV
pca_PV = PCA(n_components=3)
PV_pca = pca_PV.fit_transform(X[['PV_cert_energ',
'PV_provincia_Almería', 'PV_provincia_Castellón', 'PV_provincia_Murcia',
'PV_provincia_Otros', 'PV_provincia_Valencia',
'PV_longitud_descripcion2_Larga', 'PV_longitud_descripcion2_Media',
'PV_longitud_descripcion2_Ninguna', 'PV_clase_piso_Medios',
'PV_clase_piso_Nuevos', 'PV_clase_piso_Viejos', 'PV_tipo_Garaje',
'PV_tipo_Otros', 'PV_tipo_Piso', 'PV_num_banos_0', 'PV_num_banos_1',
'PV_num_banos_+1']])
X['PV_pca1'] = PV_pca[:, 0]
X['PV_pca2'] = PV_pca[:, 1]
X['PV_pca3'] = PV_pca[:, 2]
# Eliminamos los posibles outliers creados
pca_vars = ['PV_idea_pca_price', 'PV_idea_pca_pc','PV_pca1', 'PV_pca2', 'PV_pca3']
for var in pca_vars:
cota = X[var].mean()+3*X[var].std()
y = y[X[var]<=cota]
X = X[X[var]<=cota]
X = X.drop([
'IDEA_unitprice_sale_residential', 'IDEA_price_sale_residential',
'IDEA_stock_sale_residential', 'IDEA_demand_sale_residential',
'IDEA_unitprice_rent_residential', 'IDEA_price_rent_residential',
'IDEA_stock_rent_residential', 'IDEA_demand_rent_residential',
'IDEA_pc_comercio',
'IDEA_pc_industria', 'IDEA_pc_oficina', 'IDEA_pc_otros',
'IDEA_pc_residencial', 'IDEA_pc_trast_parking', 'IDEA_ind_tienda',
'IDEA_ind_turismo', 'IDEA_ind_alimentacion', 'IDEA_ind_riqueza',
'IDEA_rent_alquiler', 'IDEA_ind_liquidez', 'PV_cert_energ',
'PV_provincia_Almería', 'PV_provincia_Castellón', 'PV_provincia_Murcia',
'PV_provincia_Otros', 'PV_provincia_Valencia',
'PV_longitud_descripcion2_Larga', 'PV_longitud_descripcion2_Media',
'PV_longitud_descripcion2_Ninguna', 'PV_clase_piso_Medios',
'PV_clase_piso_Nuevos', 'PV_clase_piso_Viejos', 'PV_tipo_Garaje',
'PV_tipo_Otros', 'PV_tipo_Piso', 'PV_num_banos_0', 'PV_num_banos_1',
'PV_num_banos_+1'], axis = 1) | _____no_output_____ | MIT | NotebookFinalTrainTest.ipynb | Riferji/Cajamar-2019 |
Model training. We trained a very large number of models, arguably more than 1000 (via loops and helper functions), to see which one fits our dataset best. To avoid drowning in the hundreds of experiments, those runs live in the notebooks *Modelos2\_TestingsModelos.ipynb*, *Modelos3\_featureSelection.ipynb* and *Modelos4\_ForwardAndEnsemble.ipynb*; only the relevant models are reproduced here. | import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn import neighbors
from sklearn import tree
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
import xgboost as xgb
from xgboost.sklearn import XGBRegressor
# Métrica
from sklearn.metrics import median_absolute_error
models = {
'DecisionTreeRegressor10':tree.DecisionTreeRegressor(max_depth = 10),
'RandomForestRegressor20':RandomForestRegressor(max_depth=5, n_estimators = 20, random_state=0),
'RandomForestRegressor50':RandomForestRegressor(max_depth=10, n_estimators = 50, random_state=0),
'RandomForestRegressor100':RandomForestRegressor(max_depth=10, n_estimators = 100, random_state=0),
'ExtraTreesRegressor10':ExtraTreesRegressor(n_estimators=10,random_state=0),
'ExtraTreesRegressor100':ExtraTreesRegressor(n_estimators=100, random_state=0),
'ExtraTreesRegressor150':ExtraTreesRegressor(n_estimators=150, random_state=0),
'GradientBoostingRegressor30_md5':GradientBoostingRegressor(n_estimators=30, learning_rate=0.1, max_depth=5, random_state=0, loss='ls'),
'GradientBoostingRegressor50_md5':GradientBoostingRegressor(n_estimators=50, learning_rate=0.1, max_depth=5, random_state=0, loss='ls'),
'XGB25':XGBRegressor(max_depth = 10, n_estimators=25, random_state=7),
'XGB46':XGBRegressor(max_depth = 10, n_estimators=46, random_state=7),
'XGB60':XGBRegressor(max_depth = 10, n_estimators=60, random_state=7),
'XGB100':XGBRegressor(max_depth = 10, n_estimators=100, random_state=7)
}
def EntrenarModelos(X, y, models, drop_vars):
'''
X, y --> Nuestra data
models --> Diccionario de modelos a entrenar
drop_vars --> Variables que no queremos en nuestro modelo
'''
X_train, X_test, y_train, y_test = train_test_split(X.drop(drop_vars, axis = 1), y, test_size=0.3, random_state=7)
y_test_predict = {}
errores = {}
# Definimos el diccionario donde vamos guardando el mejor modelo con su error asociado
minimo = {'':np.inf}
for name, model in models.items():
#try:
model = model.fit(X_train, y_train)
y_test_predict[name] = model.predict(X_test)
errores[name] = median_absolute_error(np.exp(y_test)-1, np.exp(y_test_predict[name])-1)
print(name,': ', errores[name], sep = '')
# Actualizamos el diccionario
if list(minimo.values())[0] > errores[name]:
minimo = {name:errores[name]}
return minimo
EntrenarModelos(X, y, models, []) | DecisionTreeRegressor10: 21.55431393653663
RandomForestRegressor20: 18.580995303598044
RandomForestRegressor50: 19.072373408609195
RandomForestRegressor100: 18.861664050362826
ExtraTreesRegressor10: 19.80307387148771
ExtraTreesRegressor100: 18.588761921652768
ExtraTreesRegressor150: 18.57115721270116
GradientBoostingRegressor30_md5: 19.084825961682014
GradientBoostingRegressor50_md5: 18.973164773235773
XGB25: 18.734364471435548
XGB46: 18.948498382568367
XGB60: 19.172454528808608
XGB100: 19.46763259887696
| MIT | NotebookFinalTrainTest.ipynb | Riferji/Cajamar-2019 |
To tune the hyperparameters we implemented a manual grid search, varying the parameters with nested for loops. We found the optimum at *n_estimators = 30*, *reg_lambda = 0.9*, *subsample = 0.6*, *colsample_bytree = 0.7*. | models = {'BestXGBoost' : XGBRegressor(max_depth = 10,
n_estimators= 30,
reg_lambda = 0.9,
subsample = 0.6,
colsample_bytree = 0.7,
objective = 'reg:linear',
random_state=7)
}
EntrenarModelos(X, y, models, [])
# Variable que indica si queremos iniciar la búsqueda
QuererBuscar = False
if QuererBuscar == True:
models = {}
for i1 in [30, 40, 46, 50, 60]:
for i2 in [0.7, 0.8, 0.9, 1]:
for i3 in [0.5, 0.6, 0.7, 0.8, 0.9, 1]:
for i4 in [0.5, 0.6, 0.7, 0.8, 0.9, 1]:
models['XGB_{}_{}_{}_{}'.format(i1, i2, i3, i4)] = XGBRegressor(max_depth = 10,
n_estimators= i1,
reg_lambda = i2,
subsample = i3,
colsample_bytree = i4,
objective = 'reg:linear',
random_state=7)
print(len(models))
else:
models = {'BestXGBoost' : XGBRegressor(max_depth = 10,
n_estimators= 30,
reg_lambda = 0.9,
subsample = 0.6,
colsample_bytree = 0.7,
objective = 'reg:linear',
random_state=7)
}
EntrenarModelos(X, y, models, []) | BestXGBoost: 17.369460296630855
| MIT | NotebookFinalTrainTest.ipynb | Riferji/Cajamar-2019 |
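As an alternative to the manual loops above, a sketch using scikit-learn's GridSearchCV (hypothetical grid, reusing the same estimator; note it scores on the log-target scale rather than on the back-transformed TARGET used by EntrenarModelos):

from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost.sklearn import XGBRegressor

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
param_grid = {
    'n_estimators': [30, 40, 46, 50, 60],
    'reg_lambda': [0.7, 0.8, 0.9, 1],
    'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1],
    'colsample_bytree': [0.5, 0.6, 0.7, 0.8, 0.9, 1],
}
search = GridSearchCV(
    XGBRegressor(max_depth=10, objective='reg:linear', random_state=7),
    param_grid,
    scoring='neg_median_absolute_error',
    cv=3,
    n_jobs=-1,
)
search.fit(X_train, y_train)
print(search.best_params_, -search.best_score_)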
Once the best model is fixed, we search for the best subset of variables. For that we define a forward-selection function that adds variables one at a time according to the error they achieve. | def Entrenar(X,y,model):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
model = model.fit(X_train, y_train)
y_pred = model.predict(X_test)
error = median_absolute_error(np.exp(y_test)-1, np.exp(y_pred)-1)
return error
def EntrenarForward(X, y, model, ini_vars):
'''
X,y --> Nuestra data
model --> un modelo
ini_vars --> variables con las que comenzamos
'''
# Variable que indica si hemos terminado
fin = False
# Variables con las que estamos trabajando
current_vars = ini_vars
all_vars = X.columns
possible_vars = np.setdiff1d(all_vars, current_vars)
while not fin and len(possible_vars) > 0: # Lo que antes pase
possible_vars = np.setdiff1d(all_vars, current_vars)
if len(current_vars) == 0:
# Si no tenemos variables, cuestro error es inf
best_error = np.inf
else:
base_error = Entrenar(X[current_vars], y, model)
best_error = base_error
best_var = ''
for var in possible_vars:
var_error = Entrenar(X[current_vars+[var]],y,model)
if var_error < best_error:
best_error = var_error
best_var = var
print('Best var: {} --> {:.4f}'.format(best_var, best_error))
# Si tenemos una best_var
if len(best_var) > 0:
current_vars += [best_var]
else:
fin = True
print('Best vars:', current_vars)
print('Best error:', best_error)
return best_error
EntrenarForward(X, y, XGBRegressor(max_depth = 10,
n_estimators= 30,
reg_lambda = 0.9,
subsample = 0.6,
colsample_bytree = 0.7,
objective = 'reg:linear',
random_state=7),
[])
EntrenarForward(X, y, XGBRegressor(max_depth = 10,
n_estimators= 30,
reg_lambda = 0.9,
subsample = 0.6,
colsample_bytree = 0.7,
objective = 'reg:linear',
random_state=7),
['HY_precio']) | Best var: GA_page_views --> 18.8182
Best var: PV_pca2 --> 18.6006
Best var: IDEA_pc_1960_69 --> 18.4076
Best var: GA_mean_bounce --> 18.2948
Best var: GA_exit_rate --> 18.1165
Best var: PV_longitud_descripcion --> 18.1131
Best var: IDEA_pc_2000_10 --> 18.0981
Best var: IDEA_pc_1990_99 --> 17.9477
Best var: PV_pca3 --> 17.8531
Best var: --> 17.8531
Best vars: ['HY_precio', 'GA_page_views', 'PV_pca2', 'IDEA_pc_1960_69', 'GA_mean_bounce', 'GA_exit_rate', 'PV_longitud_descripcion', 'IDEA_pc_2000_10', 'IDEA_pc_1990_99', 'PV_pca3']
Best error: 17.85312286376954
| MIT | NotebookFinalTrainTest.ipynb | Riferji/Cajamar-2019 |
Since forward selection did not improve on it, let's look at the feature importances of our best tree-based model. | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
xgb_model = XGBRegressor(max_depth = 10,
n_estimators= 30,
reg_lambda = 0.9,
subsample = 0.6,
colsample_bytree = 0.7,
objective = 'reg:linear',
random_state=7).fit(X_train, y_train)
y_test_predict = xgb_model.predict(X_test)
error = median_absolute_error(np.exp(y_test)-1, np.exp(y_test_predict)-1)
print('BestXGBoost: ', error, sep = '')
v = xgb_model.feature_importances_
plt.figure(figsize=(8,6))
plt.bar(range(len(v)), v)
plt.title('Feature Importances')
plt.xticks(range(len(X_train.columns)), list(X_train.columns), rotation = 90)
plt.show()
# Ordenamos las variables de menor a mayor
features_ordered, colnames_ordered = zip(*sorted(zip(xgb_model.feature_importances_, X_train.columns)))
# Intended to leave out the 10 worst variables; note that as written the call drops colnames_ordered[10:], i.e. it trains on the 10 least important features only
EntrenarModelos(X, y, models, list(colnames_ordered[10:])) | BestXGBoost: 20.791393280029297
| MIT | NotebookFinalTrainTest.ipynb | Riferji/Cajamar-2019 |
So the best model is the first XGBoost we trained. Test set: we apply the same transformations to the test data. | df = pd.read_table('Estimar_UH2019.txt', sep = '|', dtype={'HY_cod_postal':str})
# Tenemos varios Nans en HY_provincias, por lo que creamos la siguiente función que nos ayudará a imputarlos con
# ayuda del código postal
def ArreglarProvincias(df):
# Diccionario de los códigos postales. 'xxddd' --> xx es el código asociado a la provincia
diccionario_postal = {'02':'Albacete','03':'Alicante','04':'Almería','01':'Álava','33':'Asturias',
'05':'Avila','06':'Badajoz','07':'Baleares', '08':'Barcelona','48':'Bizkaia',
'09':'Burgos','10':'Cáceres','11':'Cádiz','39':'Cantabria','12':'Castellón',
'13':'Ciudad Real','14':'Córdoba','15':'A Coruña','16':'Cuenca','20':'Gipuzkoa',
'17':'Gerona','18':'Granada','19':'Guadalajara','21':'Huelva','22':'Huesca',
'23':'Jaén','24':'León','25':'Lérida','27':'Lugo','28':'Madrid','29':'Málaga',
'30':'Murcia','31':'Navarra','32':'Ourense','34':'Palencia','35':'Las Palmas',
'36':'Pontevedra','26':'La Rioja','37':'Salamanca','38':'Tenerife','40':'Segovia',
'41':'Sevilla','42':'Soria','43':'Tarragona','44':'Teruel','45':'Toledo','46':'Valencia',
'47':'Valladolid','49':'Zamora','50':'Zaragoza','51':'Ceuta','52':'Melilla'}
# Obtenemos los códigos postales que nos faltan
codigos_postales = df.loc[df.HY_provincia.isnull()].HY_cod_postal
# Recorremos la pareja index, value
for idx, cod in zip(codigos_postales.index, codigos_postales):
# Del cod solo nos interesan los dos primeros valores para la provincia.
df.loc[idx,'HY_provincia'] = diccionario_postal[cod[:2]]
# Devolvemos el df de las provincias
return df
# Obtenemos nuestro df con las provincias imputadas
df = ArreglarProvincias(df)
########## Metros ##############
# Volvemos Nans los valores de 0m^2 o inferior --> Los 0 provocan errores en una nueva variable de €/m2
df.loc[df['HY_metros_utiles'] <= 0,'HY_metros_utiles'] = np.nan
df.loc[df['HY_metros_totales'] <= 0,'HY_metros_totales'] = np.nan
# Obtenemos las posiciones de los valores faltantes en los metros útiles
posiciones_nans = df['HY_metros_totales'].isnull()
# Rellenamos los Nans con los metros totales
df.loc[posiciones_nans,'HY_metros_totales'] = df.loc[posiciones_nans,'HY_metros_utiles']
# Obtenemos las posiciones de los valores faltantes een los metros útiles
posiciones_nans = df['HY_metros_utiles'].isnull()
# Rellenamos los Nans con los metros totales
df.loc[posiciones_nans,'HY_metros_utiles'] = df.loc[posiciones_nans,'HY_metros_totales']
# Si continuamos teniendo Nans
if df[['HY_metros_utiles', 'HY_metros_totales']].isnull().sum().sum()>0: # Hay 2 .sum para sumarlo todo
# Cuales son los indices de los registros que tienen nans
index_nans = df.index[df['HY_metros_utiles'].isnull()]
for i in index_nans:
tipo = df.loc[i, 'HY_tipo']
df.loc[i, ['HY_metros_utiles', 'HY_metros_totales']] = group_tipo.loc[tipo] # Recuperamos group_tipo
########## Precios ############
# Creamos una nueva variable que sea ¿Existe precio anterior?--> Si/No
df['PV_precio_anterior'] = df['HY_precio_anterior'].isnull()
# Y modificamos precio anterior para que tenga los valores del precio actual como anterior
df.loc[df['HY_precio_anterior'].isnull(),'HY_precio_anterior'] = df.loc[df['HY_precio_anterior'].isnull(),'HY_precio']
######## Descripción y distribución #########
# Creamos 2 nuevas variables con la longitud del texto expuesto (Nan = 0)
# Igualamos los NaN a carácteres vacíos
df.loc[df['HY_descripcion'].isnull(),'HY_descripcion'] = ''
df.loc[df['HY_distribucion'].isnull(),'HY_distribucion'] = ''
# Calculamos su longitud
df['PV_longitud_descripcion'] = df['HY_descripcion'].apply(lambda x: len(x))
df['PV_longitud_distribucion'] = df['HY_distribucion'].apply(lambda x: len(x))
####### Cantidad de imágenes #########
# Añadimos una nueva columna que es la cantidad de imágenes que tiene asociado el piso
# El df de información de las imágenes tiene 3 columnas: id, posicion_foto, carácteres_aleatorios
df_imagenes = pd.read_csv('df_info_imagenes.csv', sep = '|',encoding = 'utf-8')
# Realizamos un count de los ids de las imagenes (Y nos quedamos con el valor de la
# variable Posiciones (Al ser un count, nos es indiferente la variable seleccionada))
df_count_imagenes = df_imagenes.groupby('HY_id').count()['Posiciones']
# Definimos la función que asocia a cada id su número de imágenes
def AñadirCantidadImagenes(x):
try:
return df_count_imagenes.loc[x]
except:
return 0
# Creamos la variable
df['PV_cantidad_imagenes'] = df['HY_id'].apply(lambda x: AñadirCantidadImagenes(x))
######### Imputación de las variables IDEA #########
# En el notebook ImputacionNans.ipynb se explica en mayor profundidad las funciones definidas. Por el momento,
# para imputar los valores Nans de las variables IDEA realizamos lo siguiente:
# -1. Hacemos la media de las variables que no son Nan por CP
# -2. Imputamos por la media del CP
# -3. Repetimos para aquellos codigos postales que son todo Nans con la media por provincias (Sin contar los imputados)
# -4. Imputamos los Nans que faltan por la media general de todo (Sin contar los imputados)
var_list = [
['IDEA_pc_1960', 'IDEA_pc_1960_69', 'IDEA_pc_1970_79', 'IDEA_pc_1980_89','IDEA_pc_1990_99', 'IDEA_pc_2000_10'],
['IDEA_pc_comercio','IDEA_pc_industria', 'IDEA_pc_oficina', 'IDEA_pc_otros','IDEA_pc_residencial', 'IDEA_pc_trast_parking'],
['IDEA_ind_tienda', 'IDEA_ind_turismo', 'IDEA_ind_alimentacion'],
['IDEA_ind_riqueza'],
['IDEA_rent_alquiler'],
['IDEA_ind_elasticidad', 'IDEA_ind_liquidez'],
['IDEA_unitprice_sale_residential', 'IDEA_price_sale_residential', 'IDEA_stock_sale_residential'],
['IDEA_demand_sale_residential'],
['IDEA_unitprice_rent_residential', 'IDEA_price_rent_residential', 'IDEA_stock_rent_residential'],
['IDEA_demand_rent_residential']
]
# Function to fix postal codes that were read incorrectly (they were read as integers).
def ArreglarCP(cp):
if len(cp)==4:
return '0'+cp
else:
return cp
def ImputarNans_test(df, vars_imput, var):
'''
    df --> Our dataframe to modify
    vars_imput --> Variables we want to impute.
    var --> Variable used for the grouping (HY_cod_postal or HY_provincia)
'''
    # Load the group dataframes that were defined during training
if var == 'HY_cod_postal':
        # The postal code was stored incorrectly, since it was saved as an int
group_cp = pd.read_csv('./DF_grupos/group_cp_{}.csv'.format(vars_imput[0]), sep = '|', encoding='utf-8', dtype={'HY_cod_postal': str})
group_cp['HY_cod_postal'] = group_cp['HY_cod_postal'].apply(lambda x: ArreglarCP(x))
group_cp.index = group_cp['HY_cod_postal']
group_cp = group_cp.drop('HY_cod_postal',axis = 1)
elif var == 'HY_provincia':
group_cp = pd.read_csv('./DF_grupos/group_prov_{}.csv'.format(vars_imput[0]), sep = '|', encoding='utf-8', index_col='HY_provincia')
else:
        print('Only HY_cod_postal or HY_provincia are accepted as the value of "var"')
    # Get the postal codes of the rows that are NaN
    codigos_nans = df.loc[df[vars_imput[0]].isnull(), var] # Any of the variables would work.
    # Since we know which codes can be filled and which cannot, we only use the ones that can be filled
    cods = np.intersect1d(codigos_nans.unique(),group_cp.index)
    # Which indices are NaN
index_nan = df.index[df[vars_imput[0]].isnull()]
for cod in cods:
        # Indexing explanation: among all the rows whose code matches ours we keep the ones whose index is NaN,
        # and to index df we need the NaN indices that satisfy the code condition.
i = index_nan[(df[var] == cod)[index_nan]]
df.loc[i, vars_imput] = group_cp.loc[cod].values
    # If we have finished imputing and still have NaNs, impute with the mean of everything
if var == 'HY_provincia' and df[vars_imput[0]].isnull().sum()>0:
df.loc[df[vars_imput[0]].isnull(), vars_imput] = group_cp.mean(axis = 0).values
    # Return the imputed dataframe
return df
# As in the previous case, we go group by group
for vars_group in var_list:
df = ImputarNans_test(df, vars_group, var = 'HY_cod_postal')
df = ImputarNans_test(df, vars_group, var = 'HY_provincia')
####### Elasticity index ##########
# Create a new variable that rounds the elasticity index to the nearest integer (it takes the values 1,2,3,4,5)
df['PV_ind_elasticidad'] = np.round(df['IDEA_ind_elasticidad'])
###### Neighborhood age #########
# Define the neighborhood-age variable based on the percentage of homes built in the area
# First we take the variables [IDEA_pc_1960, IDEA_pc_1960_69, IDEA_pc_1970_79, IDEA_pc_1980_89,
# IDEA_pc_1990_99, IDEA_pc_2000_10] and collapse them into just 3. Then we keep
# the maximum of those three to determine the age profile of the area.
df['Viejos'] = df[['IDEA_pc_1960', 'IDEA_pc_1960_69']].sum(axis = 1)
df['Medios'] = df[['IDEA_pc_1970_79', 'IDEA_pc_1980_89']].sum(axis = 1)
df['Nuevos'] = df[['IDEA_pc_1990_99', 'IDEA_pc_2000_10']].sum(axis = 1)
df['PV_clase_piso'] = df[['Viejos','Medios','Nuevos']].idxmax(axis = 1)
# Add a new variable: whether the description length is zero, between 0 and 1000 characters, or over 1000
df['PV_longitud_descripcion2'] = pd.cut(df['PV_longitud_descripcion'], bins = [-1,0,1000, np.inf], labels=['Ninguna', 'Media', 'Larga'], include_lowest=False)
# Price in euros per square meter
df['PV_precio_metro'] = df.HY_precio/df.HY_metros_totales
# Group provinces into 'Castellón','Murcia','Almería','Valencia','Otros'
def estructurar_provincias(x):
'''
    Function that maps x (a province name) to its class
    '''
    # List of classes we want to keep
if x in ['Castellón','Murcia','Almería','Valencia']:
return x
else:
return 'Otros'
df['PV_provincia'] = df.HY_provincia.apply(lambda x: estructurar_provincias(x))
# A new variable: whether the property has any layout text at all
df.loc[df['PV_longitud_distribucion'] > 0,'PV_longitud_distribucion'] = 1
# Change the energy certificate to Yes/No (1/0)
df['PV_cert_energ'] = df['HY_cert_energ'].apply(lambda x: np.sum(x != 'No'))
# Reduce the HY_tipo categories to just 3: [Piso, Garaje, Otros]
def CategorizarHY_tipo(dato):
if dato in ['Piso', 'Garaje']:
return dato
else:
return 'Otros'
df['PV_tipo'] = df['HY_tipo'].apply(CategorizarHY_tipo)
# Change the garage variable to has / does not have (1/0)
df.loc[df['HY_num_garajes']>1,'HY_num_garajes'] = 1
# Change bathrooms to 0, 1, +1 (none, one, more than one)
df['PV_num_banos'] = pd.cut(df['HY_num_banos'], [-1,0,1,np.inf], labels = [0,1,'+1'])
# Change the number of terraces to Yes/No (1/0)
df.loc[df['HY_num_terrazas']>1, 'HY_num_terrazas'] = 1
# Define the variables to drop in order to build our X set
drop_vars = ['HY_id', 'HY_cod_postal', 'HY_provincia', 'HY_descripcion',
'HY_distribucion', 'HY_tipo', 'HY_antiguedad','HY_num_banos', 'HY_cert_energ',
'HY_num_garajes', 'IDEA_pc_1960', 'IDEA_area', 'IDEA_poblacion', 'IDEA_densidad', 'IDEA_ind_elasticidad',
'Viejos', 'Medios','Nuevos']
# Explanation:
# + 'HY_id', 'HY_cod_postal' --> Too many categories
# + 'HY_provincia' --> We already have PV_provincia, which groups them
# + 'HY_descripcion','HY_distribucion' --> We keep their lengths instead
# + 'HY_tipo' --> We already created PV_tipo
# + 'HY_cert_energ','HY_num_garajes'--> We already have the associated PV variables (0/1 values)
# + 'IDEA_pc_1960' --> It is duplicated
# + 'IDEA_area', 'IDEA_poblacion', 'IDEA_densidad' --> Too many NaNs
# + 'IDEA_ind_elasticidad' --> We have the equivalent PV variable
# + 'Viejos', 'Medios','Nuevos' --> We already have PV_clase_piso
X_real_test = df.copy().drop(drop_vars, axis = 1)
# Define the variables as in training
cont_vars = ['HY_metros_utiles', 'HY_metros_totales','GA_page_views', 'GA_mean_bounce',
'GA_exit_rate', 'GA_quincena_ini', 'GA_quincena_ult','PV_longitud_descripcion',
'PV_longitud_distribucion', 'PV_cantidad_imagenes',
'PV_ind_elasticidad', 'PV_precio_metro']
# Create the dummy variables for the categorical ones
dummy_vars = ['PV_provincia','PV_longitud_descripcion2',
'PV_clase_piso','PV_tipo','PV_num_banos']
# Join our set with the dummies
X_real_test = X_real_test.join(pd.get_dummies(X_real_test[dummy_vars]))
# Drop the variables that are now dummies
X_real_test = X_real_test.drop(dummy_vars, axis=1)
############# PCA ####################
# Run a PCA on the IDEA price-related variables
idea_vars_price = [
'IDEA_unitprice_sale_residential', 'IDEA_price_sale_residential',
'IDEA_stock_sale_residential', 'IDEA_demand_sale_residential',
'IDEA_unitprice_rent_residential', 'IDEA_price_rent_residential',
'IDEA_stock_rent_residential', 'IDEA_demand_rent_residential']
idea_pca_price = pca_prices.transform(X_real_test[idea_vars_price])
X_real_test['PV_idea_pca_price'] = (idea_pca_price-idea_pca_price.min())/(idea_pca_price.max()-idea_pca_price.min())
# Run a PCA on the remaining IDEA percentage and index variables
idea_vars_pc = [
'IDEA_pc_comercio',
'IDEA_pc_industria', 'IDEA_pc_oficina', 'IDEA_pc_otros',
'IDEA_pc_residencial', 'IDEA_pc_trast_parking', 'IDEA_ind_tienda',
'IDEA_ind_turismo', 'IDEA_ind_alimentacion', 'IDEA_ind_riqueza',
'IDEA_rent_alquiler', 'IDEA_ind_liquidez']
idea_pca_pc = pca_pc.transform(X_real_test[idea_vars_pc])
X_real_test['PV_idea_pca_pc'] = (idea_pca_pc-idea_pca_pc.min())/(idea_pca_pc.max()-idea_pca_pc.min())
# Keep the PCA projection of our PV variables
PV_pca = pca_PV.transform(X_real_test[['PV_cert_energ',
'PV_provincia_Almería', 'PV_provincia_Castellón', 'PV_provincia_Murcia',
'PV_provincia_Otros', 'PV_provincia_Valencia',
'PV_longitud_descripcion2_Larga', 'PV_longitud_descripcion2_Media',
'PV_longitud_descripcion2_Ninguna', 'PV_clase_piso_Medios',
'PV_clase_piso_Nuevos', 'PV_clase_piso_Viejos', 'PV_tipo_Garaje',
'PV_tipo_Otros', 'PV_tipo_Piso', 'PV_num_banos_0', 'PV_num_banos_1',
'PV_num_banos_+1']])
X_real_test['PV_pca1'] = PV_pca[:, 0]
X_real_test['PV_pca2'] = PV_pca[:, 1]
X_real_test['PV_pca3'] = PV_pca[:, 2]
# Drop the variables we no longer need
X_real_test = X_real_test.drop([
'IDEA_unitprice_sale_residential', 'IDEA_price_sale_residential',
'IDEA_stock_sale_residential', 'IDEA_demand_sale_residential',
'IDEA_unitprice_rent_residential', 'IDEA_price_rent_residential',
'IDEA_stock_rent_residential', 'IDEA_demand_rent_residential',
'IDEA_pc_comercio',
'IDEA_pc_industria', 'IDEA_pc_oficina', 'IDEA_pc_otros',
'IDEA_pc_residencial', 'IDEA_pc_trast_parking', 'IDEA_ind_tienda',
'IDEA_ind_turismo', 'IDEA_ind_alimentacion', 'IDEA_ind_riqueza',
'IDEA_rent_alquiler', 'IDEA_ind_liquidez', 'PV_cert_energ',
'PV_provincia_Almería', 'PV_provincia_Castellón', 'PV_provincia_Murcia',
'PV_provincia_Otros', 'PV_provincia_Valencia',
'PV_longitud_descripcion2_Larga', 'PV_longitud_descripcion2_Media',
'PV_longitud_descripcion2_Ninguna', 'PV_clase_piso_Medios',
'PV_clase_piso_Nuevos', 'PV_clase_piso_Viejos', 'PV_tipo_Garaje',
'PV_tipo_Otros', 'PV_tipo_Piso', 'PV_num_banos_0', 'PV_num_banos_1',
'PV_num_banos_+1'], axis = 1)
X_real_test.columns
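# (Added sketch, not in the original notebook) Sanity check before predicting:
# the imputation steps above should have left no missing values in the feature matrix.
n_missing = X_real_test.isnull().sum().sum()
assert n_missing == 0, "{} missing values remain after imputation".format(n_missing)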
# Make the prediction
y_final_pred = xgb_model.predict(X_real_test)
# Undo the transformation (the target was log-transformed during training)
ultra_final_pred = np.exp(y_final_pred)-1
# Save the result
# Build the solution df, taking advantage of the HY_id stored in df
df_solucion = pd.DataFrame({'HY_id':df['HY_id'], 'TM_Est':ultra_final_pred})
df_solucion.head(7)
# Save the solution
df_solucion.to_csv('machine predictor_UH2019.txt',
header=True, index=False, sep='|', encoding='utf-8') | _____no_output_____ | MIT | NotebookFinalTrainTest.ipynb | Riferji/Cajamar-2019 |
Black Scholes Exercise 1: Naive implementation. Use cProfile and Line Profiler to look for bottlenecks and hotspots in the code. | # Boilerplate for the example
import cProfile
import pstats
try:
import numpy.random_intel as rnd
except:
import numpy.random as rnd
# make xrange available in python 3
try:
xrange
except NameError:
xrange = range
SEED = 7777777
S0L = 10.0
S0H = 50.0
XL = 10.0
XH = 50.0
TL = 1.0
TH = 2.0
RISK_FREE = 0.1
VOLATILITY = 0.2
TEST_ARRAY_LENGTH = 1024
###############################################
def gen_data(nopt):
return (
rnd.uniform(S0L, S0H, nopt),
rnd.uniform(XL, XH, nopt),
rnd.uniform(TL, TH, nopt),
)
nopt=100000
price, strike, t = gen_data(nopt)
call = [0.0 for i in range(nopt)]
put = [-1.0 for i in range(nopt)]
price=list(price)
strike=list(strike)
t=list(t) | _____no_output_____ | MIT | 1_BlackScholes_naive.ipynb | IntelPython/workshop |
The Naive Black Scholes algorithm (looped) | from math import log, sqrt, exp, erf
invsqrt = lambda x: 1.0/sqrt(x)
def black_scholes(nopt, price, strike, t, rate, vol, call, put):
mr = -rate
sig_sig_two = vol * vol * 2
for i in range(nopt):
P = float( price [i] )
S = strike [i]
T = t [i]
a = log(P / S)
b = T * mr
z = T * sig_sig_two
c = 0.25 * z
y = invsqrt(z)
w1 = (a - b + c) * y
w2 = (a - b - c) * y
d1 = 0.5 + 0.5 * erf(w1)
d2 = 0.5 + 0.5 * erf(w2)
Se = exp(b) * S
call [i] = P * d1 - Se * d2
put [i] = call [i] - P + Se | _____no_output_____ | MIT | 1_BlackScholes_naive.ipynb | IntelPython/workshop |
Timeit and CProfile Tests: What do you notice about the times? Try `%timeit function(args)` and `%prun function(args)`. Line_Profiler tests: How many times does each line of the function get called (hits)? | %load_ext line_profiler | _____no_output_____ | MIT | 1_BlackScholes_naive.ipynb | IntelPython/workshop |
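A minimal profiling pass for the naive `black_scholes` above (a sketch; it assumes the boilerplate cells have been run and that line_profiler is installed, which `%load_ext line_profiler` requires):
# Wall-clock timing of the naive, looped implementation
%timeit black_scholes(nopt, price, strike, t, RISK_FREE, VOLATILITY, call, put)
# Function-level profile: which calls dominate the runtime?
%prun black_scholes(nopt, price, strike, t, RISK_FREE, VOLATILITY, call, put)
# Line-level profile: per-line hits and time inside black_scholes
%lprun -f black_scholes black_scholes(nopt, price, strike, t, RISK_FREE, VOLATILITY, call, put)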
Return Forecasting: Read Historical Daily Yen Futures Data. In this notebook, you will load historical Dollar-Yen exchange rate futures data and apply time series analysis and modeling to determine whether there is any predictable behavior. | # Futures contract on the Yen-dollar exchange rate:
# This is the continuous chain of the futures contracts that are 1 month to expiration
yen_futures = pd.read_csv(
Path("yen.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
yen_futures.head()
# Trim the dataset to begin on January 1st, 1990
yen_futures = yen_futures.loc["1990-01-01":, :]
yen_futures.head() | _____no_output_____ | ADSL | time_series_analysis.ipynb | EAC49/timeseries_homework |
Return Forecasting: Initial Time-Series Plotting. Start by plotting the "Settle" price. Do you see any patterns, long-term and/or short-term? | # Plot just the "Settle" column from the dataframe:
yen_futures.Settle.plot(figsize=[15,10],title='Yen Future Settle Prices',legend=True) | _____no_output_____ | ADSL | time_series_analysis.ipynb | EAC49/timeseries_homework |
--- Decomposition Using a Hodrick-Prescott Filter. Using a Hodrick-Prescott Filter, decompose the Settle price into a trend and noise. | import statsmodels.api as sm
# Apply the Hodrick-Prescott Filter by decomposing the "Settle" price into two separate series:
noise, trend = sm.tsa.filters.hpfilter(yen_futures['Settle'])
# Create a dataframe of just the settle price, and add columns for "noise" and "trend" series from above:
df = yen_futures['Settle'].to_frame()
df['noise'] = noise
df['trend'] = trend
df.tail()
# Plot the Settle Price vs. the Trend for 2015 to the present
df.loc['2015':].plot(y=['Settle', 'trend'], figsize= (15, 10), title = 'Settle vs. Trend')
# Plot the Settle Noise
df.noise.plot(figsize= (10, 5), title = 'Noise') | _____no_output_____ | ADSL | time_series_analysis.ipynb | EAC49/timeseries_homework |
--- Forecasting Returns using an ARMA Model

Using futures Settle *Returns*, estimate an ARMA model:
1. ARMA: Create an ARMA model and fit it to the returns data. Note: Set the AR and MA ("p" and "q") parameters to p=2 and q=1: order=(2, 1).
2. Output the ARMA summary table and take note of the p-values of the lags. Based on the p-values, is the model a good fit (p < 0.05)?
3. Plot the 5-day forecast of the forecasted returns (the results forecast from the ARMA model). | # Create a series using "Settle" price percentage returns, drop any NaNs, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf, -inf values with np.nan
returns = (yen_futures[["Settle"]].pct_change() * 100)
returns = returns.replace([np.inf, -np.inf], np.nan).dropna()
returns.tail()
import statsmodels.api as sm
# Estimate an ARMA model using statsmodels (use order=(2, 1))
from statsmodels.tsa.arima_model import ARMA
arma_model = ARMA(returns['Settle'], order=(2,1))
# Fit the model and assign it to a variable called results
results = arma_model.fit()
# Output model summary results:
results.summary()
# Plot the 5 Day Returns Forecast
pd.DataFrame(results.forecast(steps=5)[0]).plot(figsize= (10, 5), title='5 Day Returns Forecast')
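# (Added sketch, not part of the original notebook) Answer the p-value question
# programmatically: a good fit would have p < 0.05 for the AR/MA lag terms.
print(results.pvalues)
print("All lag p-values < 0.05?", (results.pvalues.drop("const", errors="ignore") < 0.05).all())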
| _____no_output_____ | ADSL | time_series_analysis.ipynb | EAC49/timeseries_homework |
--- Forecasting the Settle Price using an ARIMA Model

1. Using the *raw* Yen **Settle Price**, estimate an ARIMA model.
   1. Set P=5, D=1, and Q=1 in the model (e.g., ARIMA(df, order=(5,1,1))).
   2. P = number of Auto-Regressive Lags, D = number of Differences (this is usually 1), Q = number of Moving Average Lags.
2. Output the ARIMA summary table and take note of the p-values of the lags. Based on the p-values, is the model a good fit (p < 0.05)?
3. Construct a 5 day forecast for the Settle Price. What does the model forecast will happen to the Japanese Yen in the near term? | from statsmodels.tsa.arima_model import ARIMA
# Estimate an ARIMA Model:
# Hint: ARIMA(df, order=(p, d, q))
arima_model = ARIMA(yen_futures['Settle'], order=(5,1,1))
# Fit the model
arima_results = arima_model.fit()
# Output model summary results:
arima_results.summary()
# Plot the 5 Day Price Forecast
pd.DataFrame(arima_results.forecast(steps=5)[0]).plot(figsize= (10, 5), title='5 Day Futures Price Forecast') | _____no_output_____ | ADSL | time_series_analysis.ipynb | EAC49/timeseries_homework |
--- Volatility Forecasting with GARCH

Rather than predicting returns, let's forecast near-term **volatility** of Japanese Yen futures returns. Being able to accurately predict volatility will be extremely useful if we want to trade in derivatives or quantify our maximum loss.

Using futures Settle *Returns*, estimate a GARCH model:
1. GARCH: Create a GARCH model and fit it to the returns data. Note: Set the parameters to p=2 and q=1: order=(2, 1).
2. Output the GARCH summary table and take note of the p-values of the lags. Based on the p-values, is the model a good fit (p < 0.05)?
3. Plot the 5-day forecast of the volatility. | from arch import arch_model
# Estimate a GARCH model:
garch_model = arch_model(returns, mean="Zero", vol="GARCH", p=2, q=1)
# Fit the model
garch_results = garch_model.fit(disp="off")
# Summarize the model results
garch_results.summary()
# Find the last day of the dataset
last_day = returns.index.max().strftime('%Y-%m-%d')
last_day
# Create a 5 day forecast of volatility
forecast_horizon = 5
# Start the forecast using the last_day calculated above
forecasts = garch_results.forecast(start=last_day, horizon=forecast_horizon)
# Annualize the forecast
intermediate = np.sqrt(forecasts.variance.dropna() * 252)
intermediate.head()
# Transpose the forecast so that it is easier to plot
final = intermediate.dropna().T
final.head()
# Plot the final forecast
final.plot(figsize= (10, 5), title='5 Day Forecast of Volatility') | _____no_output_____ | ADSL | time_series_analysis.ipynb | EAC49/timeseries_homework |
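To connect the forecast back to "quantifying our maximum loss" mentioned above, a rough sketch (my addition, assuming normally distributed returns; not part of the assignment) converts the one-day-ahead volatility into a 95% value-at-risk estimate:
from scipy.stats import norm
# One-day-ahead forecast variance (daily, in %^2 since returns are in percent)
day1_var = forecasts.variance.dropna().iloc[-1, 0]
day1_vol = np.sqrt(day1_var)        # daily volatility, in percent
var_95 = norm.ppf(0.95) * day1_vol  # one-sided 95% VaR (~1.65 sigma), in percent
print(f"Approx. 1-day 95% VaR: {var_95:.2f}% of position value")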
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information | # Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
# | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
Lab 1: Intro to TensorFlow and Music Generation with RNNs, Part 2: Music Generation with RNNs. In this portion of the lab, we will explore building a Recurrent Neural Network (RNN) for music generation. We will train a model to learn the patterns in raw sheet music in [ABC notation](https://en.wikipedia.org/wiki/ABC_notation) and then use this model to generate new music. 2.1 Dependencies: First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab. | # Import Tensorflow 2.0
#%tensorflow_version 2.x
import tensorflow as tf
# Download and import the MIT 6.S191 package
#!pip install mitdeeplearning
import mitdeeplearning as mdl
# Import all remaining packages
import numpy as np
import os
import time
import functools
from IPython import display as ipythondisplay
from tqdm import tqdm
#!apt-get install abcmidi timidity > /dev/null 2>&1
# Check that we are using a GPU, if not switch runtimes using Runtime > Change Runtime Type > GPU
#assert len(tf.config.list_physical_devices('GPU')) > 0
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) | Num GPUs Available: 0
| MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
2.2 DatasetWe've gathered a dataset of thousands of Irish folk songs, represented in the ABC notation. Let's download the dataset and inspect it: | mdl.__file__
# Download the dataset
songs = mdl.lab1.load_training_data()
# Print one of the songs to inspect it in greater detail!
example_song = songs[0]
print("\nExample song: ")
print(example_song)
songs[0]
len(songs) | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
We can easily convert a song in ABC notation to an audio waveform and play it back. Be patient for this conversion to run; it can take some time. | # Convert the ABC notation to an audio file and listen to it
mdl.lab1.play_song(example_song) | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
One important thing to note is that this notation of music does not simply encode the notes being played; it also carries meta information such as the song title, key, and tempo. How does the number of different characters present in the text file impact the complexity of the learning problem? This will become important soon, when we generate a numerical representation for the text data. | # Join our list of song strings into a single string containing all songs
songs_joined = "\n\n".join(songs)
# Find all unique characters in the joined string
vocab = sorted(set(songs_joined))
print("There are", len(vocab), "unique characters in the dataset")
songs_joined
print(vocab) | ['\n', ' ', '!', '"', '#', "'", '(', ')', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', '<', '=', '>', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '[', ']', '^', '_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '|']
| MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
2.3 Process the dataset for the learning task. Let's take a step back and consider our prediction task. We're trying to train an RNN model to learn patterns in ABC music, and then use this model to generate (i.e., predict) a new piece of music based on this learned information. Breaking this down, what we're really asking the model is: given a character, or a sequence of characters, what is the most probable next character? We'll train the model to perform this task. To achieve this, we will input a sequence of characters to the model, and train the model to predict the output, that is, the following character at each time step. RNNs maintain an internal state that depends on previously seen elements, so information about all characters seen up until a given moment will be taken into account in generating the prediction. Vectorize the text: before we begin training our RNN model, we'll need to create a numerical representation of our text-based dataset. To do this, we'll generate two lookup tables: one that maps characters to numbers, and a second that maps numbers back to characters. Recall that we just identified the unique characters present in the text. | ### Define numerical representation of text ###
# Create a mapping from character to unique index.
# For example, to get the index of the character "d",
# we can evaluate `char2idx["d"]`.
char2idx = {u:i for i, u in enumerate(vocab)}
# Create a mapping from indices to characters. This is
# the inverse of char2idx and allows us to convert back
# from unique index to the character in our vocabulary.
idx2char = np.array(vocab)
print(char2idx)
idx2char | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
This gives us an integer representation for each character. Observe that the unique characters (i.e., our vocabulary) in the text are mapped as indices from 0 to `len(unique)`. Let's take a peek at this numerical representation of our dataset: | print('{')
for char,_ in zip(char2idx, range(5)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
char2idx['A']
### Vectorize the songs string ###
'''TODO: Write a function to convert the all songs string to a vectorized
(i.e., numeric) representation. Use the appropriate mapping
above to convert from vocab characters to the corresponding indices.
NOTE: the output of the `vectorize_string` function
should be a np.array with `N` elements, where `N` is
the number of characters in the input string
'''
def vectorize_string(string):
return np.array([char2idx[string[i]] for i in range(len(string))])
vectorized_songs = vectorize_string(songs_joined)
vectorized_songs | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
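As a quick round-trip check (a small addition, not required by the lab), mapping the indices back through `idx2char` should recover the original text exactly:
# Decode the first 20 indices and compare against the raw joined-songs string
recovered = ''.join(idx2char[vectorized_songs[:20]])
print(recovered == songs_joined[:20])
print(repr(recovered))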
We can also look at how the first part of the text is mapped to an integer representation: | print ('{} ---- characters mapped to int ----> {}'.format(repr(songs_joined[:10]), vectorized_songs[:10]))
# check that vectorized_songs is a numpy array
assert isinstance(vectorized_songs, np.ndarray), "returned result should be a numpy array" | 'X:1\nT:Alex' ---- characters mapped to int ----> [49 22 13 0 45 22 26 67 60 79]
| MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
Create training examples and targets. Our next step is to actually divide the text into example sequences that we'll use during training. Each input sequence that we feed into our RNN will contain `seq_length` characters from the text. We'll also need to define a target sequence for each input sequence, which will be used in training the RNN to predict the next character. For each input, the corresponding target will contain the same length of text, except shifted one character to the right. To do this, we'll break the text into chunks of `seq_length+1`. Suppose `seq_length` is 4 and our text is "Hello". Then, our input sequence is "Hell" and the target sequence is "ello". The batch method will then let us convert this stream of character indices to sequences of the desired size. | ### Batch definition to create training examples ###
def get_batch(vectorized_songs, seq_length, batch_size):
# the length of the vectorized songs string
n = vectorized_songs.shape[0] - 1
# randomly choose the starting indices for the examples in the training batch
idx = np.random.choice(n-seq_length, batch_size)
'''TODO: construct a list of input sequences for the training batch'''
input_batch = [vectorized_songs[idx[i]:idx[i]+seq_length] for i in range(batch_size)]
'''TODO: construct a list of output sequences for the training batch'''
output_batch = [vectorized_songs[idx[i]+1:idx[i]+seq_length+1] for i in range(batch_size)]
# x_batch, y_batch provide the true inputs and targets for network training
x_batch = np.reshape(input_batch, [batch_size, seq_length])
y_batch = np.reshape(output_batch, [batch_size, seq_length])
return x_batch, y_batch
# Perform some simple tests to make sure your batch function is working properly!
test_args = (vectorized_songs, 10, 2)
if not mdl.lab1.test_batch_func_types(get_batch, test_args) or \
not mdl.lab1.test_batch_func_shapes(get_batch, test_args) or \
not mdl.lab1.test_batch_func_next_step(get_batch, test_args):
print("======\n[FAIL] could not pass tests")
else:
print("======\n[PASS] passed all tests!") | [PASS] test_batch_func_types
[PASS] test_batch_func_shapes
[PASS] test_batch_func_next_step
======
[PASS] passed all tests!
| MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
For each of these vectors, each index is processed at a single time step. So, for the input at time step 0, the model receives the index for the first character in the sequence, and tries to predict the index of the next character. At the next timestep, it does the same thing, but the RNN considers the information from the previous step, i.e., its updated state, in addition to the current input. We can make this concrete by taking a look at how this works over the first several characters in our text: | x_batch, y_batch = get_batch(vectorized_songs, seq_length=5, batch_size=1)
for i, (input_idx, target_idx) in enumerate(zip(np.squeeze(x_batch), np.squeeze(y_batch))):
print("Step {:3d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx]))) | Step 0
input: 10 ('.')
expected output: 1 (' ')
Step 1
input: 1 (' ')
expected output: 13 ('1')
Step 2
input: 13 ('1')
expected output: 0 ('\n')
Step 3
input: 0 ('\n')
expected output: 51 ('Z')
Step 4
input: 51 ('Z')
expected output: 22 (':')
| MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
2.4 The Recurrent Neural Network (RNN) model

Now we're ready to define and train an RNN model on our ABC music dataset, and then use that trained model to generate a new song. We'll train our RNN using batches of song snippets from our dataset, which we generated in the previous section. The model is based on the LSTM architecture, where we use a state vector to maintain information about the temporal relationships between consecutive characters. The final output of the LSTM is then fed into a fully connected [`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layer where we'll output a softmax over each character in the vocabulary, and then sample from this distribution to predict the next character.

As we introduced in the first portion of this lab, we'll be using the Keras API, specifically, [`tf.keras.Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential), to define the model. Three layers are used to define the model:
* [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding): This is the input layer, consisting of a trainable lookup table that maps the numbers of each character to a vector with `embedding_dim` dimensions.
* [`tf.keras.layers.LSTM`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM): Our LSTM network, with size `units=rnn_units`.
* [`tf.keras.layers.Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense): The output layer, with `vocab_size` outputs.

Embedding vs one-hot encoding: an embedding creates a continuous, learned representation of the vocabulary in which similar tokens stay close to each other, and it needs far lower dimensionality than one-hot encoding to represent the vocab; one-hot encoding suffers from the curse of dimensionality for a large vocabulary.

Define the RNN model: now we will define a function that we will use to actually build the model. | def LSTM(rnn_units):
return tf.keras.layers.LSTM(
rnn_units,
return_sequences=True,
recurrent_initializer='glorot_uniform',
recurrent_activation='sigmoid',
stateful=True,
) | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
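A quick numerical illustration of the embedding-vs-one-hot comparison above (a sketch, not part of the lab's required code; the embedding dimension of 16 is arbitrary):
# One-hot: each character becomes a sparse vector whose length equals the vocabulary size
one_hot = tf.one_hot(vectorized_songs[:5], depth=len(vocab))
print("One-hot shape:", one_hot.shape)          # (5, vocab_size) -- grows with the vocabulary

# Embedding: a trainable lookup into a much smaller, learned dense space
embedding_layer = tf.keras.layers.Embedding(len(vocab), 16)
dense_vectors = embedding_layer(vectorized_songs[:5])
print("Embedding shape:", dense_vectors.shape)  # (5, 16) -- fixed, independent of vocab size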
The time has come! Fill in the `TODOs` to define the RNN model within the `build_model` function, and then call the function you just defined to instantiate the model! | len(vocab)
### Defining the RNN Model ###
'''TODO: Add LSTM and Dense layers to define the RNN model using the Sequential API.'''
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
# Layer 1: Embedding layer to transform indices into dense vectors of a fixed embedding size
# None is the sequence length, just a place holder
tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]),
# Layer 2: LSTM with `rnn_units` number of units.
# TODO: Call the LSTM function defined above to add this layer.
LSTM(rnn_units),
# Layer 3: Dense (fully-connected) layer that transforms the LSTM output into the vocabulary size.
# TODO: Add the Dense layer.
# '''TODO: DENSE LAYER HERE'''
tf.keras.layers.Dense(units=vocab_size)
])
return model
# Build a simple model with default hyperparameters. You will get the chance to change these later.
model = build_model(len(vocab), embedding_dim=256, rnn_units=1024, batch_size=32) | _____no_output_____ | MIT | lab1/Part2_Music_Generation.ipynb | mukesh5237/introtodeeplearning |
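Before moving on, a brief sanity check (in the spirit of the lab; the sequence length of 100 is an assumption) confirms that the untrained model produces outputs of shape (batch_size, sequence_length, vocab_size):
# Feed one random batch through the untrained model and inspect the shapes
x, y = get_batch(vectorized_songs, seq_length=100, batch_size=32)
pred = model(x)
print("Input shape:     ", x.shape)
print("Prediction shape:", pred.shape)
model.summary()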