markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
Task 2 Display 5 records where launch sites begin with the string 'CCA' | %sql SELECT * FROM SPACEXDATASET WHERE LAUNCH_SITE LIKE 'CCA%' LIMIT 5 | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 3 Display the total payload mass carried by boosters launched by NASA (CRS) | %sql SELECT SUM(PAYLOAD_MASS__KG_) FROM SPACEXDATASET WHERE PAYLOAD LIKE '%CRS%' | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 4 Display average payload mass carried by booster version F9 v1.1 | %sql SELECT AVG(PAYLOAD_MASS__KG_) FROM SPACEXDATASET WHERE booster_version LIKE '%F9 v1.1%' | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 5 Display the date when the first successful landing outcome on a ground pad was achieved. *Hint: use the MIN function* | %sql SELECT MIN(DATE) FROM SPACEXDATASET WHERE landing__outcome = 'Success (ground pad)' | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 6 List the names of the boosters which have success in drone ship and have payload mass greater than 4000 but less than 6000 | %sql SELECT BOOSTER_VERSION FROM SPACEXDATASET WHERE landing__outcome = 'Success (drone ship)' AND PAYLOAD_MASS__KG_ > 4000 AND PAYLOAD_MASS__KG_ < 6000 | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 7 List the total number of successful and failure mission outcomes | %sql SELECT MISSION_OUTCOME, COUNT(MISSION_OUTCOME) FROM SPACEXDATASET GROUP BY MISSION_OUTCOME | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 8 List the names of the booster_versions which have carried the maximum payload mass. Use a subquery | %sql SELECT UNIQUE BOOSTER_VERSION FROM SPACEXDATASET WHERE PAYLOAD_MASS__KG_ = (SELECT MAX(PAYLOAD_MASS__KG_) FROM SPACEXDATASET) | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 9 List the failed landing_outcomes in drone ship, their booster versions, and launch site names for the year 2015 | %sql SELECT BOOSTER_VERSION, launch_site, landing__outcome FROM SPACEXDATASET WHERE LANDING__OUTCOME = 'Failure (drone ship)' AND YEAR(DATE) = 2015 | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
Task 10 Rank the count of landing outcomes (such as Failure (drone ship) or Success (ground pad)) between the date 2010-06-04 and 2017-03-20, in descending order | %sql SELECT LANDING__OUTCOME, COUNT(LANDING__OUTCOME) FROM SPACEXDATASET WHERE DATE BETWEEN '2010-06-04' AND '2017-03-20' GROUP BY LANDING__OUTCOME ORDER BY COUNT(LANDING__OUTCOME) DESC | * ibm_db_sa://gmb99703:***@dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net:50000/BLUDB
Done.
| MIT | Week 2 - SQL/jupyter-labs-eda-sql-coursera.ipynb | pFontanilla/ibm-applied-datascience-capstone |
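A possible refinement of Task 10, sketched here as an extra cell and not part of the original lab: Db2 supports the RANK() window function, which makes the requested ranking explicit instead of implied by the ORDER BY. It assumes the same SPACEXDATASET table and connection as above; the column aliases are illustrative.
%sql SELECT LANDING__OUTCOME, COUNT(LANDING__OUTCOME) AS OUTCOME_COUNT, RANK() OVER (ORDER BY COUNT(LANDING__OUTCOME) DESC) AS OUTCOME_RANK FROM SPACEXDATASET WHERE DATE BETWEEN '2010-06-04' AND '2017-03-20' GROUP BY LANDING__OUTCOME ORDER BY OUTCOME_RANK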
Highly divisible triangular number (Problem 12). The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers: 1: 1; 3: 1,3; 6: 1,2,3,6; 10: 1,2,5,10; 15: 1,3,5,15; 21: 1,3,7,21; 28: 1,2,4,7,14,28. We can see that 28 is the first triangle number to have over five divisors. What is the value of the first triangle number to have over five hundred divisors? Solution 12 | def factors(n):
f = []
for i in range(1,n+1):
if (n%i) == 0:
f.append(i)
return f
len(factors(25200))
def triangle_number(n):
tri_num = 0
for i in range(1,n+1):
tri_num += i
return tri_num
triangle_number(125150)
def find_tri_num_div(n):
x = 1
while(1):
t = triangle_number(x)
f = factors(t)
if len(f) > n:
return t
x += 1
find_tri_num_div(100) | _____no_output_____ | MIT | solutions/S0012.ipynb | trabdlkarim/UrkelOs |
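The brute-force factors() above checks every integer up to n, which becomes very slow for triangle numbers large enough to have over 500 divisors. Below is a faster sketch that counts divisors by trial division up to the square root; the function names are illustrative and not from the original solution.
def count_divisors(n):
    # Count divisors in pairs (i, n // i) by trial division up to sqrt(n)
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 2 if i != n // i else 1
        i += 1
    return count

def first_triangle_with_divisors(limit):
    # Triangle numbers T_k = k*(k+1)//2; return the first one with more than `limit` divisors
    k, tri = 1, 1
    while count_divisors(tri) <= limit:
        k += 1
        tri += k
    return tri

# first_triangle_with_divisors(500) returns 76576500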
Load the data and perform EDA. https://www.kaggle.com/pavansubhasht/ibm-hr-analytics-attrition-dataset 1. Evaluate missing values 2. Assess target class distribution 3. Assess information value of individual features (correlation analysis and pairplot). | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
ibm = pd.read_csv('WA_Fn-UseC_-HR-Employee-Attrition.csv',index_col=0)
# Evaluate missing values
ibm.isnull().sum()
ibm.describe().transpose()
# Change data types for categorical variables
# Dummy code categorical features
# Recoding
# Use .loc for the recoding to avoid chained assignment (SettingWithCopyWarning)
ibm.loc[ibm['BusinessTravel'] == 'Non-Travel', 'BusinessTravel'] = 'Never'
ibm.loc[ibm['BusinessTravel'] == 'Travel_Rarely', 'BusinessTravel'] = 'Rarely'
ibm.loc[ibm['BusinessTravel'] == 'Travel_Frequently', 'BusinessTravel'] = 'Frequently'
ibm['Attrition'].replace('No',0,inplace=True)
ibm['Attrition'].replace('Yes',1,inplace=True)
ibm = pd.get_dummies(ibm)
ibm.info()
# Assessing target variable distribution
print(ibm['Attrition'].mean())
ibm['Attrition'].hist(xrot=45.0)
# Pair Plot
from IPython.display import Image
import seaborn as sns
import matplotlib.pyplot as plt
sns_plot = sns.pairplot(ibm, hue = 'Attrition')
sns_plot.savefig("pairplot.png")
plt.clf() # Clear the pairplot figure
Image(filename='pairplot.png') # Show pairplot as image
# Correlation Analysis
sns.heatmap(ibm.corr(), cmap="Spectral")
# Correlation Analysis
ibm.corr()['Attrition'].sort_values(ascending=False) | _____no_output_____ | MIT | Notebook/Employee_Attrition_Prediction.ipynb | rjparkk/rjparkk.github.io |
4. Pre-process the dataset. 5. Split the data into training/test datasets (70/30). 4 pts. | # Dropping variables
# ibm.drop(['Over18_Y'], axis=1, inplace=True)
# ibm.drop(['EmployeeCount'], axis=1, inplace=True)
# ibm.drop(['StandardHours'], axis=1, inplace=True)
# Preparing features and labels
X = ibm.drop('Attrition',axis=1).values
y = ibm['Attrition'].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=1)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test) | _____no_output_____ | MIT | Notebook/Employee_Attrition_Prediction.ipynb | rjparkk/rjparkk.github.io |
6. Build a sequential neural network with the following parameters (3 hidden dense layers - 100, 50, 25 nodes respectively, activation function = 'relu', dropout = 0.5 for each layer). 7. Use early stopping callback to prevent overfitting. | import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Activation,Dropout
model = Sequential()
model.add(Dense(units=100,activation='relu'))
model.add(Dense(units=50,activation='relu'))
model.add(Dense(units=25,activation='relu'))
model.add(Dense(units=1,activation='sigmoid'))
# For a binary classification problem
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(x=X_train,
y=y_train,
batch_size=128,
epochs=100,
validation_data=(X_test, y_test), verbose=1
) | Epoch 1/100
9/9 [==============================] - 0s 12ms/step - loss: 0.6206 - val_loss: 0.5047
Epoch 2/100
9/9 [==============================] - 0s 3ms/step - loss: 0.4449 - val_loss: 0.4384
Epoch 3/100
9/9 [==============================] - 0s 3ms/step - loss: 0.4050 - val_loss: 0.4446
Epoch 4/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3983 - val_loss: 0.4251
Epoch 5/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3836 - val_loss: 0.4141
Epoch 6/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3731 - val_loss: 0.4061
Epoch 7/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3626 - val_loss: 0.3982
Epoch 8/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3526 - val_loss: 0.3913
Epoch 9/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3477 - val_loss: 0.3883
Epoch 10/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3405 - val_loss: 0.3834
Epoch 11/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3309 - val_loss: 0.3811
Epoch 12/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3244 - val_loss: 0.3764
Epoch 13/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3172 - val_loss: 0.3693
Epoch 14/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3106 - val_loss: 0.3662
Epoch 15/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3052 - val_loss: 0.3653
Epoch 16/100
9/9 [==============================] - 0s 3ms/step - loss: 0.3029 - val_loss: 0.3638
Epoch 17/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2909 - val_loss: 0.3721
Epoch 18/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2907 - val_loss: 0.3616
Epoch 19/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2842 - val_loss: 0.3607
Epoch 20/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2771 - val_loss: 0.3579
Epoch 21/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2752 - val_loss: 0.3539
Epoch 22/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2652 - val_loss: 0.3531
Epoch 23/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2608 - val_loss: 0.3564
Epoch 24/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2539 - val_loss: 0.3599
Epoch 25/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2473 - val_loss: 0.3608
Epoch 26/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2405 - val_loss: 0.3678
Epoch 27/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2359 - val_loss: 0.3690
Epoch 28/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2277 - val_loss: 0.3771
Epoch 29/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2231 - val_loss: 0.3824
Epoch 30/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2197 - val_loss: 0.3811
Epoch 31/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2226 - val_loss: 0.3844
Epoch 32/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2107 - val_loss: 0.3866
Epoch 33/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2250 - val_loss: 0.3812
Epoch 34/100
9/9 [==============================] - 0s 3ms/step - loss: 0.2045 - val_loss: 0.3840
Epoch 35/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1992 - val_loss: 0.3831
Epoch 36/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1838 - val_loss: 0.3894
Epoch 37/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1763 - val_loss: 0.3945
Epoch 38/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1669 - val_loss: 0.4016
Epoch 39/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1650 - val_loss: 0.4206
Epoch 40/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1835 - val_loss: 0.4356
Epoch 41/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1779 - val_loss: 0.4149
Epoch 42/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1619 - val_loss: 0.4220
Epoch 43/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1497 - val_loss: 0.4221
Epoch 44/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1370 - val_loss: 0.4334
Epoch 45/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1302 - val_loss: 0.4387
Epoch 46/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1272 - val_loss: 0.4537
Epoch 47/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1227 - val_loss: 0.4525
Epoch 48/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1267 - val_loss: 0.4700
Epoch 49/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1329 - val_loss: 0.4683
Epoch 50/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1184 - val_loss: 0.4694
Epoch 51/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1062 - val_loss: 0.4899
Epoch 52/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1023 - val_loss: 0.4893
Epoch 53/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0938 - val_loss: 0.4954
Epoch 54/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0992 - val_loss: 0.5353
Epoch 55/100
9/9 [==============================] - 0s 3ms/step - loss: 0.1150 - val_loss: 0.5232
Epoch 56/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0961 - val_loss: 0.5228
Epoch 57/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0902 - val_loss: 0.5294
Epoch 58/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0895 - val_loss: 0.5398
Epoch 59/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0790 - val_loss: 0.5374
Epoch 60/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0809 - val_loss: 0.5736
Epoch 61/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0799 - val_loss: 0.5639
Epoch 62/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0785 - val_loss: 0.5815
Epoch 63/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0748 - val_loss: 0.5737
Epoch 64/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0630 - val_loss: 0.5816
Epoch 65/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0624 - val_loss: 0.6182
Epoch 66/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0741 - val_loss: 0.6116
Epoch 67/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0582 - val_loss: 0.6305
Epoch 68/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0543 - val_loss: 0.6173
Epoch 69/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0471 - val_loss: 0.6169
Epoch 70/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0462 - val_loss: 0.6218
Epoch 71/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0402 - val_loss: 0.6338
Epoch 72/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0377 - val_loss: 0.6504
Epoch 73/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0347 - val_loss: 0.6626
Epoch 74/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0321 - val_loss: 0.6754
Epoch 75/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0305 - val_loss: 0.6815
Epoch 76/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0293 - val_loss: 0.6959
Epoch 77/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0286 - val_loss: 0.7075
Epoch 78/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0294 - val_loss: 0.7130
Epoch 79/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0269 - val_loss: 0.7256
Epoch 80/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0238 - val_loss: 0.7345
Epoch 81/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0243 - val_loss: 0.7529
Epoch 82/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0249 - val_loss: 0.7595
Epoch 83/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0211 - val_loss: 0.7841
Epoch 84/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0224 - val_loss: 0.7887
Epoch 85/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0224 - val_loss: 0.8029
Epoch 86/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0199 - val_loss: 0.7925
Epoch 87/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0164 - val_loss: 0.8088
Epoch 88/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0166 - val_loss: 0.8126
Epoch 89/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0137 - val_loss: 0.8208
Epoch 90/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0135 - val_loss: 0.8228
Epoch 91/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0130 - val_loss: 0.8390
Epoch 92/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0119 - val_loss: 0.8557
Epoch 93/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0110 - val_loss: 0.8705
Epoch 94/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0113 - val_loss: 0.8725
Epoch 95/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0107 - val_loss: 0.8759
Epoch 96/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0097 - val_loss: 0.8829
Epoch 97/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0088 - val_loss: 0.8968
Epoch 98/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0098 - val_loss: 0.9060
Epoch 99/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0090 - val_loss: 0.9155
Epoch 100/100
9/9 [==============================] - 0s 3ms/step - loss: 0.0079 - val_loss: 0.9204
| MIT | Notebook/Employee_Attrition_Prediction.ipynb | rjparkk/rjparkk.github.io |
8. Plot training and validation losses versus epochs. 9. Print out model confusion matrix. 10. Print out model classification report. 11. Print out model ROC AUC. | model_loss = pd.DataFrame(model.history.history)
model_loss.plot()
# with Dropout
from tensorflow.keras.layers import Dropout
model = Sequential()
model.add(Dense(units=100,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=50,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=25,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=1,activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
# Define the early stopping callback referenced below (the patience value here is an assumed choice)
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=25)
model.fit(x=X_train,
y=y_train,
batch_size=128,
epochs=200,
validation_data=(X_test, y_test), verbose=1,
callbacks=[early_stop]
)
model_loss = pd.DataFrame(model.history.history)
model_loss.plot()
y_pred = model.predict_classes(X_test)
from sklearn.metrics import classification_report,confusion_matrix, roc_auc_score
print(classification_report(y_test,y_pred))
print(confusion_matrix(y_test,y_pred))
print('ROC AUC: ', roc_auc_score(y_test,y_pred)) | [[363 1]
[ 72 5]]
ROC AUC: 0.531093906093906
| MIT | Notebook/Employee_Attrition_Prediction.ipynb | rjparkk/rjparkk.github.io |
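The ROC AUC printed above is computed from hard 0/1 predictions, which discards the model's ranking information and, with a heavily imbalanced target, leaves the score close to 0.5. A small sketch, assuming the same trained model, X_test and y_test as above, that scores the predicted probabilities instead:
from sklearn.metrics import roc_auc_score

# Use the sigmoid outputs (probabilities) rather than thresholded classes
y_prob = model.predict(X_test).ravel()
print('ROC AUC (probabilities):', roc_auc_score(y_test, y_prob))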
Raw Data visualisation and analysis. This notebook was designed to carry out the visualisation and analysis of the raw data. --- - Author: Luis F Patino Velasquez - MA - Date: Jun 2020 - Version: 1.0 - Notes: Files used in this notebook are in netCDF format - Jupyter version: jupyter core : 4.7.1 jupyter-notebook : 6.4.0 qtconsole : 5.1.1 ipython : 7.25.0 ipykernel : 6.0.3 jupyter client : 6.1.12 jupyter lab : 3.0.16 nbconvert : 6.1.0 ipywidgets : 7.6.3 nbformat : 5.1.3 traitlets : 5.0.5 - Python version: 3.8.5 --- Setting Python Modules | # Imports for xclim and xarray
import xclim as xc
import pandas as pd
import numpy as np
import xarray as xr
import functools
# from functools import reduce
# File handling libraries
import time
import tempfile
from pathlib import Path
# Geospatial libraries
import geopandas
import rioxarray
from shapely.geometry import mapping
# import plotting stuff
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
import matplotlib.mlab as mlab
import seaborn as sns
# set colours
# plt.style.use('default')
plt.style.use("~/.local/lib/python3.8/site-packages/matplotlib/mpl-data/stylelib/lfpv.mplstyle")
%matplotlib inline
# Set some plotting defaults
plt.rcParams['figure.figsize'] = (15, 11)
plt.rcParams['figure.dpi'] = 50
# Mapping libraries
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
fldr_images = Path('/mnt/d/MRes_dataset/Images/Others')
sep = '-----------\n-----------'
print(sep)
def UK_clip(xarray_dataset, coord_lon_name, coord_lat_name, xarray_dataset_crs):
# Setting spatial dimension in nc data
xarray_dataset.rio.set_spatial_dims(x_dim=coord_lon_name, y_dim=coord_lat_name, inplace=True)
xarray_dataset.rio.write_crs(xarray_dataset_crs, inplace=True)
# Set mask based on boundary
uk_admn = geopandas.read_file('/mnt/d/MRes_dataset/active_data/101_admin/uk_admin_boundary_py_nasa_pp_countryOutlineFromGiovanni.shp', crs="epsg:4326")
# Data for UK
uk_clipData = xarray_dataset.rio.clip(uk_admn.geometry.apply(mapping), uk_admn.crs, drop=False)
return(uk_clipData) | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
1. Reading the raw data 1.1. ERA5 | # Set directory to read and for outputs
fldr_src = Path('/mnt/d/MRes_dataset/search_data/era_copernicus_uk/')
# Create list with files
fls_lst = fldr_src.glob('**/era5_copernicus_DAY_prcp_*')
# Load multiple NetCDFs into a single xarray.Dataset
dataset_ERA = xr.open_mfdataset(paths=fls_lst, combine='by_coords', parallel=True)
dataset_ERA | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
1.2. GPM-IMERG | # Set directory to read and for outputs
fldr_src = Path('/mnt/d/MRes_dataset/search_data/gpm_imerg_nasa_uk/')
# Create list with files
fls_lst = fldr_src.glob('**/*')
# Load multiple NetCDFs into a single xarray.Dataset
dataset_GPM = xr.open_mfdataset(paths=fls_lst, combine='by_coords', parallel=True)
dataset_GPM | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
1.3. HadUK-Grid | # Set directory to read and for outputs
fldr_src = Path('/mnt/d/MRes_dataset/search_data/haduk_cedac_uk/')
# Create list with files
fls_lst = fldr_src.glob('**/*')
# Load multiple NetCDFs into a single xarray.Dataset
dataset_HAD = xr.open_mfdataset(paths=fls_lst, combine='by_coords', parallel=True)
dataset_HAD | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
2. Data Analysis 2.1. Functions | def UK_clip(xarray_dataset, coord_lon_name, coord_lat_name, xarray_dataset_crs):
"""
Return xarray with data for the UK only
:xarray_dataset: xarray
:coord_lon_name: string
:coord_lat_name: string
:xarray_dataset_crs: dictionary
:return: xarray
"""
# Setting spatial dimension in nc data
xarray_dataset.rio.set_spatial_dims(x_dim=coord_lon_name, y_dim=coord_lat_name, inplace=True)
xarray_dataset.rio.write_crs(xarray_dataset_crs, inplace=True)
# Set mask based on boundary
uk_admn = geopandas.read_file('/mnt/d/MRes_dataset/active_data/101_admin/uk_admin_boundary_py_nasa_pp_countryOutlineFromGiovanni.shp', crs="epsg:4326")
# Data for UK
uk_clipData = xarray_dataset.rio.clip(uk_admn.geometry.apply(mapping), uk_admn.crs, drop=False)
return(uk_clipData)
def plot_setup(subplot_ref, data_source1, data_source2):
"""
Return mapplotlib figure
:subplot_ref: list of integers
:data_source1: string
:data_source2: string
:return: mapplotlib figure
"""
# x-axis labels
subplot_ref.grid(b=True, which='major', color='grey', linestyle='-', alpha=0.3)
subplot_ref.set_xticks(x)
subplot_ref.set_xticklabels([*range(2001,2020,1)])
# Set the tick positions
subplot_ref.set_xticks(x)
# Set the tick labels
subplot_ref.xaxis.set_tick_params(labelsize='x-large')
subplot_ref.yaxis.set_tick_params(labelsize='x-large')
# Set title and axis
subplot_ref.grid(b=True, which='major', color='grey', linestyle='-', alpha=0.3)
subplot_ref.set_ylabel('Precipitation (mm)', fontdict={'fontsize': 20, 'fontweight': 'normal'})
subplot_ref.set_xlabel('Years', fontdict={'fontsize': 20, 'fontweight': 'normal'})
# Set text
subplot_ref.text(0.95, 0.95, 'HadUK-Grid', horizontalalignment='center', verticalalignment='top',\
transform=subplot_ref.transAxes, fontsize='x-large', fontweight='bold',\
bbox=dict(facecolor='none', edgecolor='#a65628', boxstyle='round', linewidth=5.0))
if data_source2 == 'ERA':
subplot_ref.text(0.95, 0.92, ' ERA5 ', horizontalalignment='center', verticalalignment='top',\
transform=subplot_ref.transAxes, fontsize='x-large', fontweight='bold',\
bbox=dict(facecolor='none', edgecolor='#377eb8', boxstyle='round', linewidth=5.0))
else:
subplot_ref.text(0.95, 0.92, 'GPM-IMERG', horizontalalignment='center', verticalalignment='top',\
transform=subplot_ref.transAxes, fontsize='x-large', fontweight='bold',\
bbox=dict(facecolor='none', edgecolor='#4daf4a', boxstyle='round', linewidth=5.0))
def violin_clr(figure, colour):
for vp in figure['bodies']:
vp.set_facecolor(colour)
for partname in ('cbars','cmins','cmaxes','cmeans'):
vp = figure[partname]
vp.set_edgecolor(colour)
vp.set_linewidth(1)
def saving_image(subplot_ref, fldr_plot, file_name):
"""
Save image output in folder
:subplot_ref: list of integers
:fldr_plot: pathlib folder path
:file_name: string
"""
extent = subplot_ref.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
fig.savefig((Path(fldr_plot / file_name)), bbox_inches=extent)
# Pad the saved area by 10% in the x-direction and 20% in the y-direction
fig.savefig((Path(fldr_plot / file_name)), bbox_inches=extent.expanded(1.1, 1.2)) | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
2.2. Yearly Average Analysis. Here we are plotting the mean yearly value for each of the datasets for the whole UK | # Get annual value from daily data
arr_yearPrcp_ERA = dataset_ERA.groupby('time.year').sum(dim='time')
arr_yearPrcp_GPM = dataset_GPM.groupby('time.year').sum(dim='time')
arr_yearPrcp_HAD = dataset_HAD.groupby('time.year').sum(dim='time')
# only use mainland UK data
arr_yearPrcp_ERAUK = UK_clip(arr_yearPrcp_ERA, 'longitude', 'latitude', "epsg:4326")
arr_yearPrcp_GPMUK = UK_clip(arr_yearPrcp_GPM, 'lon', 'lat', "epsg:4326")
# Convert data to pandas dataframe
df_yearPrcp_ERA = arr_yearPrcp_ERA.to_dataframe().reset_index()
df_yearPrcp_GPM = arr_yearPrcp_GPM.to_dataframe().reset_index()
df_yearPrcp_HAD = arr_yearPrcp_HAD.to_dataframe().reset_index()
####################################################
#I NEED TO ADD THE FUNCTION THAT JOINS THE DATAFRAMES
#####################################################
# For HadUK-Grid, replace zero with NaN to avoid using zeros in the mean value
df_yearPrcp_HAD = df_yearPrcp_HAD.replace(0, np.NaN)
df_yearPrcp_ERA
# Get the mean yearly value (alternative: df_yearPrcp_ERA.groupby(['year']).agg({'tp': ['mean']}).reset_index())
df_MeanyearPrcp_ERA = df_yearPrcp_ERA.groupby('year', as_index=False)['tp'].mean()
df_MeanyearPrcp_GPM = df_yearPrcp_GPM.groupby('year', as_index=False)['precipitationCal'].mean()
df_MeanyearPrcp_HAD = df_yearPrcp_HAD.groupby('year', as_index=False)['rainfall'].mean()
# create dataframe with mean yearly value
dfs_lst = [df_MeanyearPrcp_ERA, df_MeanyearPrcp_GPM, df_MeanyearPrcp_HAD]
df_final = functools.reduce(lambda left,right: pd.merge(left,right,on='year'), dfs_lst)
df_final | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
* **Plotting the yearly average for the UK using all datasets** | # Create copy of dataframe
df_plot = df_final
# Rename columns
df_plot.rename(columns = {'tp':'prcp_ERA5', 'precipitationCal':'prcp_IMERG',
'rainfall':'prcp_HadGrid-UK'}, inplace = True)
# change year column to date format
df_plot['year'] = pd.to_datetime(df_plot['year'], format='%Y')
# Plot data
ERA = df_plot['prcp_ERA5'].tolist()
GPM = df_plot['prcp_IMERG'].tolist()
HAD = df_plot['prcp_HadGrid-UK'].tolist()
yrs = df_plot['year'].tolist()
# Create plot
fig, axs = plt.subplots(figsize=(15, 11))
axs.plot(yrs, ERA, label = 'prcp ERA5', marker='D')
axs.plot(yrs, GPM, label = 'prcp GPM-IMERG', marker='v')
axs.plot(yrs, HAD, label = 'prcp HadGrid-UK', marker='o')
axs.xaxis.set_tick_params(labelsize='large')
axs.yaxis.set_tick_params(labelsize='large')
# Set title and axis
axs.grid(b=True, which='major', color='grey', linestyle='-', alpha=0.3)
axs.set_ylabel('precipitation (mm)', fontdict={'fontsize': 18, 'fontweight': 'normal'})
axs.set_xlabel('years', fontdict={'fontsize': 18, 'fontweight': 'normal'})
# Set legend
axs.legend(bbox_to_anchor=(0, 1, 1, 0), loc='best', fontsize='large', ncol=3) | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
* **Creating climatology map for all datasets** | # Summ data by year
year_dataset = dataset_GPM.groupby('time.year').sum(dim='time')
# year_dataset_climat = UK_clip(year_dataset, 'longitude', 'latitude', "epsg:4326")
# year_dataset_climat = dataset_HAD.groupby('time.year').sum(dim='time')
# Change to pandas dataframe
df = year_dataset.to_dataframe().reset_index()
# Group by coordinate and average
grouped_df=df.groupby(['latitude','longitude']).mean()
grouped_df1 = grouped_df.reset_index()
grouped_prcp = grouped_df1.drop(['year'], axis = 1)
# Pivot dataframe ready for the plot
val_pivot_df = grouped_prcp.pivot(index='latitude', columns='longitude', values='tp')
# Plot
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, axs = plt.subplots(figsize=(8,15))
mm = Basemap(resolution='i',projection='merc',ellps='WGS84',llcrnrlat=49,urcrnrlat=61,llcrnrlon=-9,urcrnrlon=2,lat_ts=20,ax=axs)
lons = val_pivot_df.columns.values
lats = val_pivot_df.index.values
data_values = val_pivot_df.values
masked_data = np.ma.masked_invalid(data_values)
lon, lat = np.meshgrid(lons, lats)
xi, yi = mm(lon, lat)
cs = mm.pcolor(xi,yi,masked_data,shading='auto')
fig.colorbar(cs, ax=axs, shrink=0.8, pad=0.15, label='any_text')
# add shp file as coastline
# mm.readshapefile('/mnt/c/Users/C0060017/Documents/Taught_Material/MRes_Dissertation/Dissertation/MRes_dataset/active_data/101_admin/uk_admin_boundary_py_nasa_pp_countryOutlineFromGiovanni', 'uk_admin_boundary')
# Map properties set up
merid = mm.drawmeridians(
np.arange(-180, 180, 2),
labels=[False, False, False, True])
parall = mm.drawparallels(
np.arange(0, 160),
labels=[True, True, False, False])
plt.show()
# filterinfDataframe = df[(df['longitude'] == -9.0) & (df['latitude'] == 61.0) ]
# filterinfDataframe
| _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
2.3. Data distribution. Here we are plotting the distribution of the mean daily precipitation for each year - *The plotted dataset contains the daily mean value for each year at each grid cell* | # Get average value by season
ERA_season_mean = dataset_ERA.groupby('time.season').mean('time')
# Change to dataframe
df_era_season = ERA_season_mean.to_dataframe().reset_index()
test = df_era_season[(df_era_season["season"] == 'DJF')]
test2 = df_era_season[(df_era_season["season"] == 'MAM')]
test3 = df_era_season[(df_era_season["season"] == 'JJA')]
test4 = df_era_season[(df_era_season["season"] == 'SON')]
test_data = [test['tp'], test2['tp'], test3['tp'], test4['tp']]
x = [1,2,3,4]
print(df_era_season.shape[0])
fig, axes = plt.subplots(figsize=(30,15))
# axes.violinplot(dataset = [test['tp'],test2['tp'], test3['tp'], test4['tp']])
axes.violinplot([test['tp'],test2['tp'], test3['tp'], test4['tp']], showmeans=True, showmedians=False, showextrema=True, points=10000)
# x-axis labels
axes.set_xticks(x)
axes.set_xticklabels(['DJF', 'MAM','JJA', 'SON'])
plt.show()
# df = df_era_season.set_index(['season'])
# df
# grouped = df['tp'].groupby(level='season')
# grouped.boxplot(rot=45, fontsize=12, figsize=(8,10))
# Get average value by year
ERA_yearly_mean = dataset_ERA.groupby('time.year').mean('time')
GPM_yearly_mean = dataset_GPM.groupby('time.year').mean('time')
HAD_yearly_mean = dataset_HAD.groupby('time.year').mean('time')
# Change to dataframe
df_era_yearly = ERA_yearly_mean.to_dataframe().reset_index()
df_gpm_yearly = GPM_yearly_mean.to_dataframe().reset_index()
df_had_yearly = HAD_yearly_mean.to_dataframe().reset_index()
# For HadUK NaN values need to be removed
df_had_yearly_final = df_had_yearly.dropna(subset=['rainfall'], how='all')
# integer for x axis
x = [*range(1,len(df_era_yearly['year'].unique()) +1, 1)]
# Create list to store data for the graph
dataset_lst_ERA=[]
dataset_lst_GPM=[]
dataset_lst_HAD=[]
# Create graph datasets
for yr in [*range(2001,2020,1)]:
dataset_lst_ERA.append(df_era_yearly[(df_era_yearly["year"] == yr)]['tp'])
dataset_lst_GPM.append(df_gpm_yearly[(df_gpm_yearly["year"] == yr)]['precipitationCal'])
dataset_lst_HAD.append(df_had_yearly_final[(df_had_yearly_final["year"] == yr)]['rainfall'])
# Create plots
fig, axs = plt.subplots(2, 1, figsize=(50,50))
# HadUK-Grid and ERA5
vp_era = axs[0].violinplot(dataset=dataset_lst_ERA, showmeans=True, showmedians=False, showextrema=True)
vp_had = axs[0].violinplot(dataset=dataset_lst_HAD, showmeans=True, showmedians=False, showextrema=True)
plot_setup(axs[0],'HAD','ERA')
# change colour of violin to match other graphs
violin_clr(vp_had, '#a65628')
violin_clr(vp_era, '#377eb8')
# # saving image
# file_name = 'HADUK-ERA5_Year_Mean_Daily_Distribution.png'
# saving_image(axs[0], fldr_images, file_name)
# HadUK-Grid and GPM-IMERG
vp_gpm = axs[1].violinplot(dataset=dataset_lst_GPM, showmeans=True, showmedians=False, showextrema=True)
vp_had = axs[1].violinplot(dataset=dataset_lst_HAD, showmeans=True, showmedians=False, showextrema=True)
plot_setup(axs[1],'HAD','GPM-IMERG')
# change colour of violin to match other graphs
violin_clr(vp_had, '#a65628')
violin_clr(vp_gpm, '#4daf4a')
# # saving image
# file_name = 'HADUK-GPM-IMERG_Year_Mean_Daily_Distribution.png'
# saving_image(axs[1], fldr_images, file_name)
plt.show()
# Make sure it shows a nice layout, avoiding overlapping
plt.tight_layout() | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
2.3.1. Descriptive statistics. Here we get the individual tables showing the descriptive characteristics. | # Create dataframe using the data for each year - these data were used in the violin plots
dataset_lst_ERA
dataset_lst_GPM
dataset_lst_HAD
# Convert to pandas dataframes
ERA = pd.DataFrame(list(map(np.ravel, dataset_lst_ERA)))
GPM = pd.DataFrame(list(map(np.ravel, dataset_lst_GPM)))
HAD = pd.DataFrame(list(map(np.ravel, dataset_lst_HAD)))
# Get descriptive statistics for each year and all datasets
ERA_stats = ERA.apply(pd.Series.describe, axis=1)
GPM_stats = GPM.apply(pd.Series.describe, axis=1)
HAD_stats = HAD.apply(pd.Series.describe, axis=1)
dfs = [ERA_stats, GPM_stats, HAD_stats]
for df in dfs:
# Add years as column
df['years'] = [*range(2001,2020,1)]
# Shift column 'year' to first position
first_column = df.pop('years')
# insert column using insert(position,column_name,first_column) function
df.insert(0, 'years', first_column) | _____no_output_____ | MIT | py_notebooks/RawDataAnalysis.ipynb | GeoFelpave/MResDissertation_Aug2021 |
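To compare the three datasets side by side, the per-year summary tables built above can be stacked into one labelled frame; a short sketch assuming the ERA_stats, GPM_stats and HAD_stats dataframes from this cell:
# Stack the three summary tables, labelling each block with its dataset
all_stats = pd.concat([ERA_stats, GPM_stats, HAD_stats], keys=['ERA5', 'GPM-IMERG', 'HadUK-Grid'])
all_stats.head()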
The Dispersion Relation. The _dispersion relation_ is the function that relates the frequency $\omega$ and the wavevector $k$. It characterizes each wave type and leads to the labels for the various types. - CMA diagram - phase velocity vs normalized frequency - normalized or not - density - angle - field strength - transverse motions of the electrons on cyclotron resonance sec.2.9.3 The plasma pulsation is: $$\omega_{p_s} = \sqrt{\frac{n_s q_s^2}{m_s \varepsilon_0}}$$ | def plasma_frequency(n, q, m):
'''
Returns the plasma angular frequency for a given species.
'''
omega_p = sqrt(n*q**2/(m*epsilon_0))
return omega_p
def cyclotron_frequency(q, m, B0):
'''
Returns the cyclotron angular frequency for a given species.
'''
omega_c = np.abs(q)*B0/m
return omega_c | _____no_output_____ | MIT | notebooks/Fusion_Basics/Dispersion Relation.ipynb | Hash--/documents |
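For completeness alongside the plasma frequency above, the quantity implemented by cyclotron_frequency() is the cyclotron (gyro) angular frequency: $$\omega_{c_s} = \frac{|q_s| B_0}{m_s}$$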
Let's define a convenient object: a particle species. | class Species:
def __init__(self, m, q, description=None):
self.m = m
self.q = q
self.description = description
def omega_p(self, n):
return plasma_frequency(n, self.q, self.m)
def omega_c(self, B0):
return cyclotron_frequency(self.q, self.m, B0)
def __repr__(self):
return 'Specie:{}. Mass:{} kg, charge:{} C'.format(self.description, self.m, self.q)
electron = Species(electron_mass, -elementary_charge, description='Electron')
print(electron)
deuterium = Species(physical_constants['deuteron mass'][0], +elementary_charge, description='Deuterium')
print(deuterium) | Specie:Electron. Mass:9.10938356e-31 kg, charge:-1.6021766208e-19 C
Specie:Deuterium. Mass:3.343583719e-27 kg, charge:1.6021766208e-19 C
| MIT | notebooks/Fusion_Basics/Dispersion Relation.ipynb | Hash--/documents |
The cold plasma tensor. The cold plasma tensor is given by: $$\mathbf{K} = \left(\begin{matrix}K_\perp & K_\times & 0 \\-K_\times & K_\perp & 0 \\0 & 0 & K_\parallel\end{matrix}\right)$$ with $$\begin{array}{lcl}K_\perp = S &=& 1 - \displaystyle \sum_k \frac{\omega_{pk}^2}{\omega^2 - \omega_{ck}^2}\\i K_\times = D &=& \displaystyle \sum_k \frac{\epsilon_k \omega_{ck} \omega_{pk}^2}{\omega \left( \omega^2 - \omega_{ck}^2\right)}\\K_\parallel = P &=& 1 - \displaystyle \sum_k \frac{\omega_{pk}^2}{\omega^2}\end{array}$$ | def K_perp(species, n, B0, f):
K_perp = 1
omega = 2*np.pi*f
for k, specie in enumerate(species):
K_perp -= specie.omega_p(n[k])**2 / (omega**2 - specie.omega_c(B0)**2)
return K_perp
def K_parallel(species, n, f):
K_parallel = 1
omega = 2*np.pi*f
for k,specie in enumerate(species):
K_parallel -= specie.omega_p(n[k])**2 / omega**2
return K_parallel
def K_cross(species, n, B0, f):
K_cross = 0
omega = 2*np.pi*f
for k, specie in enumerate(species):
K_cross += np.sign(specie.q) * specie.omega_c(B0) * specie.omega_p(n[k])**2 / (omega*(omega**2 - specie.omega_c(B0)**2))
return -1j*K_cross
plasma = (electron, deuterium)
n_e = 1e17 # m^-3
n_D = 1e17 # m^-3
n = (n_e, n_D)
B0 = 1 # T
f = 5e9 # Hz
print(K_perp(plasma, n, B0, f))
print(K_parallel(plasma, n, f))
print(K_cross(plasma, n, B0, f))
np.sign(electron.q)
freqs = np.logspace(6, 11, 1001)
loglog(freqs, abs(K_parallel(plasma, n, freqs)), lw=2)
loglog(freqs, abs(K_perp(plasma, n, B0, freqs)), lw=2)
loglog(freqs, abs(1j*K_cross(plasma, n, B0, freqs)), lw=2)
xlabel('f [Hz]', fontsize=16)
yticks(fontsize=16)
xticks(fontsize=16)
grid(True)
legend(('$K_\parallel$', '$K_\perp$', '$K_X$' ), fontsize=16)
axvline(deuterium.omega_c(B0)/(2*pi), lw=2, ls='--', color='k')
text(x=2.5e6, y=1e4, s='$\omega_{c,D}$', fontsize=16)
axvline(deuterium.omega_p(n_e)/(2*pi), lw=2, ls='--', color='g')
text(x=1e8, y=1e5, s='$\omega_{p,D}$', fontsize=16)
axvline(electron.omega_p(n_e)/(2*pi), lw=2, ls='--', color='g')
text(x=1e9, y=1e5, s='$\omega_{p,e}$', fontsize=16)
axvline(electron.omega_c(B0)/(2*pi), lw=2, ls='--', color='k')
text(x=1e10, y=1e1, s='$\omega_{c,e}$', fontsize=16)
def solve_dispersion_relation(plasma, n, B0, f, theta):
S = K_perp(plasma, n, B0, f)
P = K_parallel(plasma, n, f)
D = 1j*K_cross(plasma, n, B0, f)
R = S+D
L = S-D
A = S*np.sin(theta)**2 + P*np.cos(theta)**2
B = R*L*np.sin(theta)**2 + P*S*(1+np.cos(theta)**2)
C = P*R*L
# Solve the biquadratic A*(n^2)^2 - B*(n^2) + C = 0 for n^2 (note the minus sign on B)
p = (A, -B, C)
n = np.roots(p)
return n
diel_index = np.array([solve_dispersion_relation(plasma, n, B0=3, f=f, theta=0) for f in freqs])
loglog(freqs, real(diel_index[:,0]), lw=2)
loglog(freqs, real(diel_index[:,1]), lw=2)
grid(True)
xlabel('f [Hz]', fontsize=16) | _____no_output_____ | MIT | notebooks/Fusion_Basics/Dispersion Relation.ipynb | Hash--/documents |
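For reference, solve_dispersion_relation() solves the cold-plasma (Stix) dispersion relation, a biquadratic in the refractive index n, which is why np.roots returns values of $n^2$: $$A n^4 - B n^2 + C = 0, \qquad A = S\sin^2\theta + P\cos^2\theta,\quad B = RL\sin^2\theta + PS(1+\cos^2\theta),\quad C = PRL$$ with $R = S + D$ and $L = S - D$.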
Dogs vs Cats with Keras --- Import Libraries | %reload_ext autoreload
%autoreload 2
%matplotlib inline
PATH = "../data/dogscats/dogscats/"
sz=224
batch_size=64
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.layers import Dropout, Flatten, Dense
from keras.applications import ResNet50
from keras.models import Model, Sequential
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
from keras.applications.resnet50 import preprocess_input
import matplotlib.pyplot as plt
| _____no_output_____ | MIT | notebooks/keras_lesson1.ipynb | AmanDaVinci/DeepLabs |
Load Data | train_data_dir = f'{PATH}train'
validation_data_dir = f'{PATH}valid'
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(train_data_dir, target_size=(sz, sz),
batch_size=batch_size, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_data_dir, shuffle=False, target_size=(sz, sz),
batch_size=batch_size, class_mode='binary') | Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
| MIT | notebooks/keras_lesson1.ipynb | AmanDaVinci/DeepLabs |
Build Model | base_model = ResNet50(weights='imagenet', include_top=False)
base_model.summary()
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers: layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) | _____no_output_____ | MIT | notebooks/keras_lesson1.ipynb | AmanDaVinci/DeepLabs |
Train Model | %%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=3, workers=4,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size)
len(model.layers)
split_at = 140
for layer in model.layers[:split_at]: layer.trainable = False
for layer in model.layers[split_at:]: layer.trainable = True
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
%%time
model.fit_generator(train_generator, train_generator.n // batch_size, epochs=1, workers=3,
validation_data=validation_generator, validation_steps=validation_generator.n // batch_size) | Epoch 1/1
359/359 [==============================] - 263s 733ms/step - loss: 0.0779 - acc: 0.9739 - val_loss: 0.2162 - val_acc: 0.9718
CPU times: user 9min 54s, sys: 38.2 s, total: 10min 33s
Wall time: 4min 25s
| MIT | notebooks/keras_lesson1.ipynb | AmanDaVinci/DeepLabs |
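When the top ResNet50 blocks are unfrozen as above, it is common practice to recompile with a much smaller learning rate so the pretrained weights are not disturbed too quickly. This is a hedged sketch, not the notebook's original setup; the 1e-5 value is an assumption.
from keras.optimizers import Adam

# Recompile with a small learning rate before fine-tuning the unfrozen ResNet50 layers
# (the first positional argument is the learning rate in both older and newer Keras APIs)
model.compile(optimizer=Adam(1e-5), loss='binary_crossentropy', metrics=['accuracy'])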
Model Evaluation | test_data_dir = f'{PATH}valid'
test_generator = test_datagen.flow_from_directory(test_data_dir, target_size=(sz,sz),
batch_size=batch_size, class_mode='binary')
test_generator.n
sample_x, sample_y = test_generator.next()
sample_x.shape, sample_y.shape
sample_pred = model.predict(x=sample_x, batch_size=32, verbose=1)
acc = np.array(sample_pred==sample_y)
sample_pred.shape, sample_y.shape
sample_pred = (sample_pred > 0.5).astype(int).flatten()  # threshold the sigmoid outputs at 0.5; astype(int) alone would map every probability below 1.0 to class 0
acc = (sample_pred == sample_y)
fig, ax = plt.subplots()
ax.plot(sample_pred[:32].astype(int), c='r')
ax.plot(sample_y[:32], c='b')
acc.mean() | _____no_output_____ | MIT | notebooks/keras_lesson1.ipynb | AmanDaVinci/DeepLabs |
Brand ClassificationSource : https://www.dqlab.id/Typed by : Aulia Khalqillah Import Libraries | import datetime
import pandas as pd
import matplotlib.pyplot as plt | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Load data | dataset = pd.read_csv('retail_raw_reduced.csv')
dataset | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Info data | dataset.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 order_id 5000 non-null int64
1 order_date 5000 non-null object
2 customer_id 5000 non-null int64
3 city 5000 non-null object
4 province 5000 non-null object
5 product_id 5000 non-null object
6 brand 5000 non-null object
7 quantity 5000 non-null int64
8 item_price 5000 non-null int64
dtypes: int64(4), object(5)
memory usage: 351.7+ KB
| MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Exploratory Data Analysis Generate new columns of order_month and Gross Marchendise Volume (GMV) | dataset['order_month'] = dataset['order_date'].apply(lambda x: datetime.datetime.strptime(x, "%Y-%m-%d").strftime('%Y-%m'))
dataset['gmv'] = dataset['item_price']*dataset['quantity']
dataset | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Select the top 5 brands based on their total quantity sold in December 2019 | top_brands = (dataset[dataset['order_month']=='2019-12'].groupby('brand')['quantity']
.sum()
.reset_index()
.sort_values(by='quantity',ascending=False)
.reset_index()
.drop('index',axis=1)
.head(5))
top_brands | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Generate new dataframe for top 5 brands in December 2019 | dataset_top5brand_dec = dataset[
(dataset['order_month']=='2019-12') & (dataset['brand'].isin(top_brands['brand'].to_list()))
].reset_index().drop('index',axis=1)
dataset_top5brand_dec | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Highest daily quantity per brand | max_brand = dataset_top5brand_dec.groupby(['order_date','brand'])['quantity'].sum().unstack().idxmax().index
max_order_date = dataset_top5brand_dec.groupby(['order_date','brand'])['quantity'].sum().unstack().idxmax().values
max_quantity = dataset_top5brand_dec.groupby(['order_date','brand'])['quantity'].sum().unstack().max().values
max_quantity_value = ({
'brand' : max_brand,
'order_date': max_order_date,
'max_quantity': max_quantity
})
max_quantity_datset = pd.DataFrame(max_quantity_value)
idx_max_qty = max_quantity_datset['max_quantity'].argmax()
max_quantity_datset
max_quantity_datset.iloc[idx_max_qty] | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
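A more direct way to reach the same "highest daily quantity" answer, assuming the dataset_top5brand_dec frame built above; this is just an alternative sketch of the unstack/idxmax logic:
# Sum quantity per (brand, order_date) and pick the single largest entry
daily_qty = dataset_top5brand_dec.groupby(['brand', 'order_date'])['quantity'].sum()
best_brand, best_date = daily_qty.idxmax()
print(best_brand, best_date, daily_qty.max())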
Total quantity per brand per day in December 2019 | dataset_top5brand_dec.groupby(['order_date','brand'])['quantity'].sum().unstack()
dataset_top5brand_dec.groupby(['order_date','brand'])['quantity'].sum().unstack().plot(marker='.', cmap='plasma', figsize=(10,5))
plt.title('Daily Sold Quantity Dec 2019 Breakdown by Brands',loc='center',pad=30, fontsize=15, color='blue')
plt.xlabel('Order Date', fontsize = 12)
plt.ylabel('Quantity',fontsize = 12)
plt.grid(color='darkgray', linestyle=':', linewidth=0.5)
plt.ylim(ymin=0)
plt.legend(loc='best', bbox_to_anchor=(1.2, 1), shadow=True, ncol=1)
plt.annotate('Highest Quantity', xy=(7, 310), xytext=(8, 300),
weight='bold', color='red',
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color='red'))
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Plot number of sold products for each brand in December 2019 | dataset_top5brand_dec.groupby('brand')['product_id'].nunique().sort_values(ascending=False).plot(kind='bar', color='green', figsize=(10,5))
plt.title('Number of Sold Products per Brand, December 2019',loc='center',pad=30, fontsize=15, color='blue')
plt.xlabel('Brand', fontsize = 15)
plt.ylabel('Number of Products',fontsize = 15)
plt.ylim(ymin=0)
plt.xticks(rotation=0)
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Generate a new dataframe with the total quantity for each product | dataset_top5brand_dec_per_product = dataset_top5brand_dec.groupby(['brand','product_id'])['quantity'].sum().reset_index()
dataset_top5brand_dec_per_product | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Add a column for the quantity group (>= 100 or < 100) | dataset_top5brand_dec_per_product['quantity_group'] = dataset_top5brand_dec_per_product['quantity'].apply(
lambda x: '>= 100' if x>=100 else '< 100'
)
dataset_top5brand_dec_per_product.sort_values('quantity',ascending=False,inplace=True)
dataset_top5brand_dec_per_product | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
How many products does each brand have? | s_sort = dataset_top5brand_dec_per_product.groupby('brand')['product_id'].nunique().sort_values(ascending=False)
s_sort
dataset_top5brand_dec_per_product_by_quantity = dataset_top5brand_dec_per_product.groupby(['brand','quantity_group'])['product_id'].nunique().reindex(index=s_sort.index, level='brand').unstack()
dataset_top5brand_dec_per_product_by_quantity
dataset_top5brand_dec_per_product_by_quantity.plot(kind='bar', stacked=True, figsize=(10,5))
plt.title('Number of Sold Products per Brand, December 2019',loc='center',pad=30, fontsize=15, color='blue')
plt.xlabel('Brand', fontsize = 15)
plt.ylabel('Number of Products',fontsize = 15)
plt.ylim(ymin=0)
plt.xticks(rotation=0)
plt.show() | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Six of Brand P's products were sold more than 100 pcs each, the highest count among the five brands. In contrast, every Brand C product sold fewer than 100 pcs. | plt.hist(dataset_top5brand_dec.groupby('product_id')['item_price'].median(), bins=20, stacked=True, range=(1,2000000), color='green', edgecolor='black')
plt.title('Distribution of Price Median per Product\nTop 5 Brands in Dec 2019', fontsize=15, color='blue')
plt.xlabel('Price Median (1000000)', fontsize = 12)
plt.ylabel('Number of Products', fontsize = 12)
plt.xlim(xmin=0,xmax=2000000)
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Based on the median price, most of the sold products sit in the 250000 to 750000 price range. In other words, most products across these brands are bought for less than 1000000. | data_per_product_top5brand_dec = dataset_top5brand_dec.groupby('product_id').agg({'quantity': 'sum', 'gmv':'sum', 'item_price':'median'}).reset_index()
data_per_product_top5brand_dec
plt.scatter(data_per_product_top5brand_dec['quantity'],data_per_product_top5brand_dec['gmv'], marker='+', color='red')
plt.title('Correlation of Quantity and GMV per Product\nTop 5 Brands in December 2019',fontsize=15, color='blue')
plt.xlabel('Quantity', fontsize = 12)
plt.ylabel('GMV (in Millions)',fontsize = 12)
plt.xlim(xmin=0,xmax=300)
plt.ylim(ymin=0,ymax=200000000)
labels, locations = plt.yticks()
plt.yticks(labels, (labels/1000000).astype(int))
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Looking at the relationship between quantity sold and GMV for the top 5 brands in December 2019, most products were sold fewer than 50 pcs, which keeps their GMV contribution low. Only some products were sold more than 50 pcs. | plt.scatter(data_per_product_top5brand_dec['item_price'],data_per_product_top5brand_dec['quantity'], marker='o', color='green')
plt.title('Correlation of Price Median and Quantity\nTop 5 Brands in December 2019',fontsize=15, color='blue')
plt.xlabel('Price Median (1000000)', fontsize = 12)
plt.ylabel('Quantity',fontsize = 12)
plt.xlim(xmin=0,xmax=2000000)
plt.ylim(ymin=0,ymax=250)
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | top5brand_classification.ipynb | auliakhalqillah/top5brand-classification |
Bioenergy consumption for each fuel. Gross inland consumption from Eurostat energy balances. | import pandas as pd
import os
import datetime
csv_input_dir = 'data'
csv_output_dir = datetime.datetime.today().strftime('%Y-%m-%d')
if not os.path.exists(csv_output_dir):
os.mkdir(csv_output_dir)
df = pd.read_csv(os.path.join(os.path.abspath(csv_input_dir), 'eurostat_2002_2018_tj.csv'), decimal=',')
df
for country in ['CZ', 'AT', 'DK', 'NL', 'PL', 'SK']:
label = country.lower()
cdf = df.loc[df['country'] == country, ['country', 'year', 'fuel', 'gross_inland_consumption']].pivot_table(values='gross_inland_consumption', index='year', columns='fuel')
cdf.to_csv(os.path.join(os.path.abspath(csv_output_dir), f'{label}_selected_fuels_consumption_tj.csv'), decimal=',') | _____no_output_____ | MIT | 2020/selection/fuels.ipynb | jandolezal/balances |
Combine trajectory data. List of tasks accomplished in this Jupyter Notebook: - Output 4 dataframes combining all animal trajectories: Fed animals acclimation phase, Fed animals experiment phase, Starved animals acclimation phase, and Starved animals experiment phase | import numpy as np
import pandas as pd
import eleanor_constants as EL
df = pd.read_csv("./data/experiment_IDs/cleaned_static_data.csv")
for val in ["acclimate", "experiment"]:
for food, tag in EL.fed.items():
df_food = df[df['starved'] == tag]
master_df = pd.DataFrame()
for index, row in df_food.iterrows():
animal = row["animal_ID"]
readname = "./data/trajectories/video_calculations/"+animal+"-"+val+".csv"
temp = pd.read_csv(readname)
temp["animal_ID"] = animal
temp["treatment_odor"] = row["treatment_odor"]
master_df = pd.concat([master_df, temp], sort=False)
master_df.drop(["interpolated", "manual_tracker_fix", "objid", "pixel_height", "pixel_width",
"measurement_x", "measurement_y", "position_x", "position_y", "bin_ID",
"turn", "larvae_length_mm",
"pos_x_mm", "pos_y_mm"],
axis=1, inplace=True)
master_df.to_csv("./data/trajectories/summary/modeling_"+\
food+"_"+val+"_all_animals.csv", index=None)
print("--- All files finished ---") | --- All files finished ---
| MIT | 6_combine_trajectory_data_for_modeling.ipynb | riffelllab/Mosquito-larval-analyses-2 |
**Q10: About the format() function** Let's learn about defining multiple variables and about the `format()` function. Run the code below. `format()` is another built-in function you will use often, so get comfortable with it. For now, just run the program. Script name: training10.py | # How to define multiple variables
x_data, y_data = 100, 1000
print("x_data:", x_data, "y_data : ", y_data) | x_data: 100 y_data : 1000
| MIT | source/training10.ipynb | hskm07/pybeginner_training100 |
In the example above, writing `variable1, variable2, ... = value1, value2, ...` assigns value1 to variable1, value2 to variable2, and so on. | # How to define multiple variables
x_string, y_string, z_number = "python", "vba", 10*10
print("x_string:", x_string, "y_string : ", y_string, "z_number : ", z_number) | x_string: python y_string : vba z_number : 100
| MIT | source/training10.ipynb | hskm07/pybeginner_training100 |
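Multiple assignment also lets you swap two values without a temporary variable; a small extra example, not part of the original script:
a, b = 1, 2
a, b = b, a  # swap the two values in one statement
print("a:", a, "b:", b)  # a: 2 b: 1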
*** How to use the format() function | # Exercise 1
msg = "私の年齢は{0}歳で、出身地は{1}です。趣味は{2}です。".format(29,"東京都","釣り")
print(msg) | 私の年齢は29歳で、出身地は東京都です。趣味は釣りです。
| MIT | source/training10.ipynb | hskm07/pybeginner_training100 |
***Format function: `"string{}".format(arguments...)`*** The parts enclosed in curly braces {} are called replacement fields and are substituted with the arguments. In the example above, "私の年齢は{0}歳で、出身地は{1}です。趣味は{2}です。".format(29,"東京都","釣り") is filled in as follows: {0} --> argument 1: 29, {1} --> argument 2: "東京都", {2} --> argument 3: "釣り". | # Exercise 2
hello = "私は株式会社サンプルに{0}年に入社しました。職種は{1}です。得意なことは{2}と{3}です。".format(2020, "営業", "走ること", "Python")
print(hello) | 私は株式会社サンプルに2020年に入社しました。職種は営業です。得意なことは走ることとPythonです。
| MIT | source/training10.ipynb | hskm07/pybeginner_training100 |
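Replacement fields can also be named, and Python 3.6+ offers f-strings as a shorthand; a short illustrative example (the name and age values are made up and not part of training10.py):
msg = "My name is {name} and I am {age} years old.".format(name="Taro", age=29)
print(msg)

# Equivalent f-string (Python 3.6+)
name, age = "Taro", 29
print(f"My name is {name} and I am {age} years old.")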
*** Using a for loop to take out the characters one at a time | print("\"for文\"で文字を一文字ずつ取り出します")
# Get the length of msg with the len() function
ln = len(msg)
for i in range(ln):
print("{0}番目の文字は、{1}です。".format(i, msg[i])) | "for文"で文字を一文字ずつ取り出します
0番目の文字は、私です。
1番目の文字は、のです。
2番目の文字は、年です。
3番目の文字は、齢です。
4番目の文字は、はです。
5番目の文字は、2です。
6番目の文字は、9です。
7番目の文字は、歳です。
8番目の文字は、でです。
9番目の文字は、、です。
10番目の文字は、出です。
11番目の文字は、身です。
12番目の文字は、地です。
13番目の文字は、はです。
14番目の文字は、東です。
15番目の文字は、京です。
16番目の文字は、都です。
17番目の文字は、でです。
18番目の文字は、すです。
19番目の文字は、。です。
20番目の文字は、趣です。
21番目の文字は、味です。
22番目の文字は、はです。
23番目の文字は、釣です。
24番目の文字は、りです。
25番目の文字は、でです。
26番目の文字は、すです。
27番目の文字は、。です。
| MIT | source/training10.ipynb | hskm07/pybeginner_training100 |
Data Science Unit 1 Sprint Challenge 2: Storytelling with Data. In this sprint challenge you'll work with a dataset from **FiveThirtyEight's article, [Every Guest Jon Stewart Ever Had On ‘The Daily Show’](https://fivethirtyeight.com/features/every-guest-jon-stewart-ever-had-on-the-daily-show/)**! Part 0 — Run this starter code. You don't need to add or change anything here. Just run this cell and it loads the data for you, into a dataframe named `df`. (You can explore the data if you want, but it's not required to pass the Sprint Challenge.) | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/daily-show-guests/daily_show_guests.csv')
df.rename(columns={'YEAR': 'Year', 'Raw_Guest_List': 'Guest'}, inplace=True)
def get_occupation(group):
if group in ['Acting', 'Comedy', 'Musician']:
return 'Acting, Comedy & Music'
elif group in ['Media', 'media']:
return 'Media'
elif group in ['Government', 'Politician', 'Political Aide']:
return 'Government and Politics'
else:
return 'Other'
df['Occupation'] = df['Group'].apply(get_occupation) | _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
Part 1 — What's the breakdown of guests’ occupations per year? For example, in 1999, what percentage of guests were actors, comedians, or musicians? What percentage were in the media? What percentage were in politics? What percentage were from another occupation? Then, what about in 2000? In 2001? And so on, up through 2015. So, **for each year of _The Daily Show_, calculate the percentage of guests from each occupation:** - Acting, Comedy & Music - Government and Politics - Media - Other. Hints: 1. Use pandas to make a **crosstab** of **`Year`** & **`Occupation`**. ([This documentation](http://pandas.pydata.org/pandas-docs/stable/reshaping.html#cross-tabulations) has examples and explanation.) 2. To get percentages instead of counts, use crosstab's **`normalize`** parameter to normalize over each _row._ ([This documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html) describes the parameter and its options.) 3. You'll know you've calculated the crosstab correctly when the percentage of "Acting, Comedy & Music" guests is 90.36% in 1999, and 45% in 2015. | df.head()
**PART 1: CROSSTAB** | cross = pd.crosstab(df.Year, df.Occupation, normalize = 'index')
cross
| _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
Part 2 — Recreate this explanatory visualization: | from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2015/08/hickey-datalab-dailyshow.png'
example = Image(url, width=500)
display(example) | _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
**Hint:** use the crosstab you calculated in part 1! **Expectations:** Your plot should include: - 3 lines visualizing "occupation of guests, by year." The shapes of the lines should look roughly identical to 538's example. Each line should be a different color. (But you don't need to use the _same_ colors as 538.) - Legend or labels for the lines. (But you don't need each label positioned next to its line or colored like 538.) - Title in the upper left: _"Who Got To Be On 'The Daily Show'?"_ with more visual emphasis than the subtitle. (Bolder and/or larger font.) - Subtitle underneath the title: _"Occupation of guests, by year"_ Any visual element not specifically mentioned in the expectations is an optional bonus, but it's _not_ required to pass the Sprint Challenge. | cross.index
import matplotlib.style as style
style.available
cross100 = 100*cross
fig, ax = plt.subplots(facecolor = 'white', figsize = (8,6))
style.use("fivethirtyeight") # Doesn't work here: the style is applied after the figure was already created, so it has no effect on this plot
year = cross100.index
media = cross100['Media']
gov = cross100['Government and Politics']
entertainment = cross100['Acting, Comedy & Music']
ax.plot(year, media, color = 'purple', linewidth = 3, label = 'Media' )
ax.plot(year, gov, color = 'orangered', linewidth = 3)
ax.plot(year, entertainment, color = 'dodgerblue', linewidth = 3)
ax.tick_params(axis = 'x', labelrotation = 0, colors = 'black', pad = 4)
x_ticks = [2000,2004,2008,2012]
ax.set_xticks(x_ticks)
ax.tick_params(axis = 'y', labelrotation = 0, colors = 'black')
y_ticks = [0,25,50,75,100]
ax.set_yticks(y_ticks)
ax.legend().set_visible(True)
# plt.annotate()
plt.annotate("Media", xy = (2003, 28)) # annotate uses data coordinates, so x must be a year inside the plotted 1999-2015 range (the y value here is an approximate percentage for the Media line)
plt.title("Who Got To Be On 'The Daily Show'?", fontweight='bold', loc = 'left')
# plt.xlabel('Guest', fontweight='bold')
# plt.ylabel('Number of Appearances', fontweight='bold')
# ax.text(0,50,s="Who Got To Be On 'The Daily Show'?", fontsize=18, weight='bold') # Doesn't work: ax.text uses data coordinates, and x=0 is far outside the 1999-2015 year range
# ax.text(-1.5,42,s="Occupation of guests, by year", fontsize=16) # Doesn't work for the same reason (x=-1.5 is off the axis)
# plt.figtext(20,20,s="Media", color = 'purple') # Doesn't work: figtext expects figure coordinates between 0 and 1
plt.show() | _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
**`plt.style` and the text-placement calls did not take effect as intended here: the style was applied after the figure was created, and the text coordinates fall outside the plotted axes, so the labels could not be placed as in the 538 original.** | !pip install --upgrade seaborn
import seaborn as sns
sns.__version__
import matplotlib.pyplot as plt
five_thirty_eight = [
"#30a2da",
"#fc4f30",
"#e5ae38",
"#6d904f",
"#8b8b8b",
]
sns.set_palette(five_thirty_eight)
sns.palplot(sns.color_palette())
plt.show() | _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
**Attempting the problem in Seaborn** **UPDATE: Same text issues with seaborn** | year = cross100.index
media = cross100['Media']
gov = cross100['Government and Politics']
entertainment = cross100['Acting, Comedy & Music']
ax1 = sns.lineplot(x=year, y=media, color = 'purple')
ax2 = sns.lineplot(x=year, y=gov, color = 'orangered')
ax3 = sns.lineplot(x=year, y=entertainment, color = 'dodgerblue')
ax1.set(xticks=[2000, 2004, 2008, 2012])
ax2.set(yticks=[0,25,50,75,100])
ax1.set(ylabel = '')
ax3.set(xlabel = '')
plt.show()
| _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
Part 3 — Who were the top 10 guests on _The Daily Show_? **Make a plot** that shows their names and number of appearances. **Hint:** you can use the pandas `value_counts` method. **Expectations:** This can be a simple, quick plot: exploratory, not explanatory. If you want, you can add titles and change aesthetics, but it's _not_ required to pass the Sprint Challenge. | top_ten = df.Guest.value_counts()[0:10]
fig, ax = plt.subplots(facecolor = 'white', figsize = (8,6))
ax = top_ten.plot.bar(width = 0.9, color = 'limegreen')
ax.tick_params(axis = 'x', labelrotation = 90, colors = 'black', pad = 2, bottom = 'on')
ax.tick_params(axis = 'y', labelrotation = 0, colors = 'black')
y_ticks = [0,5,10,15,20,25]
ax.set_yticks(y_ticks)
ax.legend().set_visible(False)
ax.set_facecolor("cornsilk")
plt.title('Top Ten Guest Apperances on the Daily Show', fontweight='bold')
plt.xlabel('Guest', fontweight='bold')
plt.ylabel('Number of Appearances', fontweight='bold')
# plt.axhline(y = -0.25, color = 'black', linewidth = 1.3, alpha = .7)
# plt.axvline(x = -0.5, color = 'black', linewidth = 1.3, alpha = .7)
for x, y in enumerate(top_ten):
ax.text(x-.15, y+.3, str(y), color = 'blue') # str(y)
plt.show() | _____no_output_____ | MIT | DS_Unit_1_Sprint_Challenge_2.ipynb | aapte11/DS-Sprint-02-Storytelling-With-Data |
Introduction to Sympy. In addition to numeric variables there are symbolic variables, which allow you to compute limits, derivatives, integrals, etc., just as is usually done in mathematics classes. To carry out these operations, which are routine in a Calculus course, you need the **Sympy** library installed. Unlike the **Math** module or the **Numpy** module that we reviewed in the previous practical, the **Sympy** module does not work with a data structure based on numbers (whether of integer or double type); it works with objects that have attributes and methods which try to reproduce the mathematical behaviour of the variables, functions, regions, equations, etc. that one usually works with in algebra and in differential and integral calculus. To use this practice script directly from a Python installation with *Anaconda*, just click on the 'Jupyter notebook' application, which is already installed by default (for more details: https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html). Objectives: - Use of symbolic variables - Assumptions and requirements on the variables - Manipulation of simple expressions in several variables. Installation and loading of the module. To make the **Sympy** module available, it has to be installed with the `pip` tool (or with `conda` if you are using separate working environments). In the case of *Microsoft Azure Notebooks* (https://notebooks.azure.com/), the following installation would be used: | !pip -q install sympy
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
To have the **Sympy** module available and import it for the rest of this practice script, we will use: | import sympy as sp
Symbolic variables. To work in symbolic mode you need to define symbolic variables, and to do this we will use the function `sp.Symbol`. Let us look at some examples of its use: | x = sp.Symbol('x') # define a variable simbólica x
y = sp.Symbol('y') # define a variable simbólica y
f = 3*x + 5*y # agora temos definida a expresion simbólica f
print(f)
a, b, c = sp.symbols('a:c') # define como simbólicas as variables a, b, c.
expresion = a**3 + b**2 + c
print(expresion) | 3*x + 5*y
a**3 + b**2 + c
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
For clarity in the implementation and in the computations, the name of the symbolic variable and the name of the **Sympy** object in which it is stored will usually coincide, but this does not have to be the case: | a = sp.Symbol('x')
print(a)
a.name | x
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
We must be clear that the variables `x` and `y` defined above are now not numbers, nor do they belong to the kinds of objects defined with the **Numpy** module reviewed in the previous practical. All symbolic variables are objects of the class `sp.Symbol`, and their attributes and methods are completely different from those of **Numpy** numeric variables and vectors: | print(type(x))
dir(x) | <class 'sympy.core.symbol.Symbol'>
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
With **Sympy** you can easily define integer constants or rational numbers (all in symbolic form) using the commands `sp.Integer` or `sp.Rational`. For example, we can define the symbolic constant $1/3$. If we did the same with the numbers Python represents by default, we would get very different results. Note also the difference between the data types assigned in the workspace: | a = sp.Rational('1/3')
b = sp.Integer('6')/sp.Integer('3')
c = 1/3
d = 1.0/3.0
print(a)
print(b)
print(c)
print(d)
print(type(a))
print(type(b))
print(type(c))
print(type(d))
print(a)
print(b) | 1/3
2
0
0.333333333333
<class 'sympy.core.numbers.Rational'>
<class 'sympy.core.numbers.Integer'>
<type 'int'>
<type 'float'>
1/3
2
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
Another simple way to handle constant values as **Sympy** objects is to use the function `sp.S`. Once all the symbolic computations are done, if we need the numeric value we use the function `sp.N` or simply `float`: | a = sp.S(2)
b = sp.S(6)
c = a/b
d = sp.N(c)
e = float(c)
print(type(a))
print(type(b))
print(type(c))
print(type(d))
print(type(e))
print(c)
print(d)
print('{0:.15f}'.format(e)) | <class 'sympy.core.numbers.Integer'>
<class 'sympy.core.numbers.Integer'>
<class 'sympy.core.numbers.Rational'>
<class 'sympy.core.numbers.Float'>
<type 'float'>
1/3
0.333333333333333
0.333333333333333
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
Throughout the course we will regularly use two real numbers that you can define as symbolic constants: $\pi$ and the number $e$. Likewise, to operate on symbolic variables or constants we must use functions that can handle this type of object, all of them implemented in the **Sympy** module (for example `sp.sin`, `sp.cos`, `sp.log`, etc.): | import numpy as np
print(np.pi)
print(type(np.pi))
p=sp.pi # definición da constante pi
print(sp.cos(p))
e = sp.E # definición do número e
print(sp.log(e))
print(sp.N(sp.pi,1000))
print(type(sp.N(sp.pi,100))) | 3.14159265359
<type 'float'>
-1
1
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198
<class 'sympy.core.numbers.Float'>
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
Assumptions about the variables. When a symbolic variable is defined, it can be given certain additional information about the type of values it can take, or the assumptions that will be applied to it. For example, before doing any computation we can decide whether the variable takes integer or real values, whether it is positive or negative, greater than a certain number, etc. This kind of information is added at the moment the symbolic variable is defined, as an optional argument. | x = sp.Symbol('x', nonnegative = True) # A raíz cadrada dun número non negativo é real
y = sp.sqrt(x)
print(y.is_real)
x = sp.Symbol('x', integer = True) # A potencia dun número enteiro é enteira
y = x**sp.S(2)
print(y.is_integer)
a = sp.Symbol('a')
b = sp.sqrt(a)
print(b.is_real)
a = sp.Symbol('a')
b = a**sp.S(2)
print(b.is_integer) | True
True
None
None
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
Since symbolic computations are consistent in **Sympy**, you can also check whether certain inequalities are true or not, as long as you are careful with the assumptions made when defining the symbolic variables: | x = sp.Symbol('x', real = True)
p = sp.Symbol('p', positive = True)
q = sp.Symbol('q', real = True)
y = sp.Abs(x) + p # O valor absoluto
z = sp.Abs(x) + q
print(y > 0)
print(z > 0) | True
q + Abs(x) > 0
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
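The declared assumptions also propagate through later computations, which is what allows the simplifications above. A small additional sketch, not from the original practical, that combines several assumptions on one symbol and inspects them:

n = sp.Symbol('n', integer=True, positive=True)
print(n.assumptions0)   # all the assumptions Sympy has derived for n
print(sp.sqrt(n**2))    # prints n, because Sympy knows that n is positive

Because `n` is declared positive, `sqrt(n**2)` simplifies directly to `n`; for a plain symbol without assumptions, Sympy would leave the expression as `sqrt(n**2)`.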
Manipulation of symbolic expressions. Just as the **Sympy** module lets us define symbolic variables, we can also define mathematical expressions from them and manipulate them: factor them, expand them, simplify them, or even print them in a way similar to how we would do it with pencil and paper: | x,y = sp.symbols('x,y', real=True)
expr = (x-3)*(x-3)**2*(y-2)
expr_long = sp.expand(expr) # Expandir expresión
print(expr_long) # Imprimir de forma estándar
sp.pprint(expr_long) # Imprimir de forma semellante a con lápiz e papel
expr_short = sp.factor(expr)
print(expr_short) # Factorizar expresión
expr = -3+(x**2-6*x+9)/(x-3)
expr_simple = sp.simplify(expr) # Simplificar expresión
sp.pprint(expr)
print(expr_simple) | x**3*y - 2*x**3 - 9*x**2*y + 18*x**2 + 27*x*y - 54*x - 27*y + 54
3 3 2 2
x ⋅y - 2⋅x - 9⋅x ⋅y + 18⋅x + 27⋅x⋅y - 54⋅x - 27⋅y + 54
(x - 3)**3*(y - 2)
2
x - 6⋅x + 9
-3 + ────────────
x - 3
x - 6
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
Given a **Sympy** expression, we can also manipulate it by substituting some symbolic variables for others, or even replacing the symbolic variables with constants. To perform this kind of substitution the `subs` method is used, and the values to be used in the substitution are given by a Python dictionary: | x,y = sp.symbols('x,y', real=True)
expr = x*x + x*y + y*x + y*y
res = expr.subs({x:1, y:2}) # Substitutición das variables simbólicas por constantes
print(res)
expr_sub = expr.subs({x:1-y}) # Subsitución de variable simbólica por unha expresión
sp.pprint(expr_sub)
print(sp.simplify(expr_sub)) | 9
2 2
y + 2⋅y⋅(-y + 1) + (-y + 1)
1
| MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable |
**Exercise 2.1** Define the expression given by the sum of the following terms: $$a+a^2+a^3+\ldots+a^N,$$ where $a$ is an arbitrary real variable and $N$ is a positive integer value. | # O TEU CÓDIGO AQUÍ | _____no_output_____ | MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable
**Exercise 2.2** What is the exact value of the previous expression when $N=15$ and $a=5/6$? What is its numeric value in floating point? | # O TEU CÓDIGO AQUÍ | _____no_output_____ | MIT | practicas/introduccion-sympy.ipynb | maprieto/CalculoMultivariable
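One possible way to approach these two exercises (a sketch for reference, not the official solution of the practical) is to build the sum symbolically with `sp.Sum` and then substitute the requested values:

a = sp.Symbol('a', real=True)
N, k = sp.symbols('N k', integer=True, positive=True)
S = sp.Sum(a**k, (k, 1, N))                                # a + a**2 + ... + a**N, still unevaluated
exact = S.subs(N, 15).doit().subs(a, sp.Rational(5, 6))    # exact value for N=15, a=5/6
print(sp.simplify(exact))
print(float(exact))                                        # floating-point value

`doit()` expands the symbolic sum once `N` is fixed, and using `sp.Rational(5, 6)` instead of `5/6` keeps the computation exact until `float` is called.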
Problem statement. Given a sorted array that may have duplicate values, use *binary search* to find the **first** and **last** indexes of a given value. For example, if you have the array `[0, 1, 2, 2, 3, 3, 3, 4, 5, 6]` and the given value is `3`, the answer will be `[4, 6]` (because the value `3` occurs first at index `4` and last at index `6` in the array). The expected complexity of the problem is $O(log(n))$. | def first_and_last_index(arr, number):
"""
Given a sorted array that may have duplicate values, use binary
search to find the first and last indexes of a given value.
Args:
arr(list): Sorted array (or Python list) that may have duplicate values
number(int): Value to search for in the array
Returns:
a list containing the first and last indexes of the given value
"""
# TODO: Write your first_and_last function here
# Note that you may want to write helper functions to find the start
# index and the end index
pass | _____no_output_____ | MIT | Course/Data structures and algorithms/3.Basic algorithm/1.Basic algorithms/5.First and last index.ipynb | IulianOctavianPreda/Udacity |
Hide Solution | def first_and_last_index(arr, number):
# search first occurence
first_index = find_start_index(arr, number, 0, len(arr) - 1)
# search last occurence
last_index = find_end_index(arr, number, 0, len(arr) - 1)
return [first_index, last_index]
def find_start_index(arr, number, start_index, end_index):
# binary search solution to search for the first index of the array
if start_index > end_index:
return -1
mid_index = start_index + (end_index - start_index)//2
if arr[mid_index] == number:
current_start_pos = find_start_index(arr, number, start_index, mid_index - 1)
if current_start_pos != -1:
start_pos = current_start_pos
else:
start_pos = mid_index
return start_pos
elif arr[mid_index] < number:
return find_start_index(arr, number, mid_index + 1, end_index)
else:
return find_start_index(arr, number, start_index, mid_index - 1)
def find_end_index(arr, number, start_index, end_index):
# binary search solution to search for the last index of the array
if start_index > end_index:
return -1
mid_index = start_index + (end_index - start_index)//2
if arr[mid_index] == number:
current_end_pos = find_end_index(arr, number, mid_index + 1, end_index)
if current_end_pos != -1:
end_pos = current_end_pos
else:
end_pos = mid_index
return end_pos
elif arr[mid_index] < number:
return find_end_index(arr, number, mid_index + 1, end_index)
else:
return find_end_index(arr, number, start_index, mid_index - 1)
| _____no_output_____ | MIT | Course/Data structures and algorithms/3.Basic algorithm/1.Basic algorithms/5.First and last index.ipynb | IulianOctavianPreda/Udacity |
Below are several different test cases you can use to check your solution. | def test_function(test_case):
input_list = test_case[0]
number = test_case[1]
solution = test_case[2]
output = first_and_last_index(input_list, number)
if output == solution:
print("Pass")
else:
print("Fail")
input_list = [1]
number = 1
solution = [0, 0]
test_case_1 = [input_list, number, solution]
test_function(test_case_1)
input_list = [0, 1, 2, 3, 3, 3, 3, 4, 5, 6]
number = 3
solution = [3, 6]
test_case_2 = [input_list, number, solution]
test_function(test_case_2)
input_list = [0, 1, 2, 3, 4, 5]
number = 5
solution = [5, 5]
test_case_3 = [input_list, number, solution]
test_function(test_case_3)
input_list = [0, 1, 2, 3, 4, 5]
number = 6
solution = [-1, -1]
test_case_4 = [input_list, number, solution]
test_function(test_case_4) | _____no_output_____ | MIT | Course/Data structures and algorithms/3.Basic algorithm/1.Basic algorithms/5.First and last index.ipynb | IulianOctavianPreda/Udacity |
print("ssss213") | _____no_output_____ | MIT | Untitled2.ipynb | mohamadhayeri9/tensorflow_example |
Project: Ventilation in the CCU EDA: Ventilator Mode in the CCU Cohort C.V. Cosgriff NYU CCU Data Science Group__Question:__ Can you guys please see how many of the 756 patients received receive SIMV or IMV as the mode of mechanical ventilation. A very interesting (and relatively simple) analysis would be to compare length of stay, mortality, ventilator free days and MV duration between those undergoing SIMV/IMV and other modes. Analysis Plan* Extract the CCU Metavision Cohort with basic demographic data* Identify the `itemid` for ventilator mode* Extract the ventilator mode items for each patient on the first day* Decide how to summarise if multiple modes exist* Assign patients to each group and compare unadjusted mortality* Build logistic regression model for hospital mortality* Build Poisson model for length of stay 0 - Environment | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import psycopg2
dbname = 'mimic'
schema_name = 'mimiciii'
db_schema = 'SET search_path TO {0};'.format(schema_name)
con = psycopg2.connect(database=dbname) | _____no_output_____ | MIT | ventilation/mode_analysis.ipynb | cosgriffc/mimic-ccu |
1 - CCU Cohort Extraction | query = db_schema + '''
SELECT ie.icustay_id, ie.hadm_id, ie.subject_id, ie.dbsource
, ie.first_careunit, ie.intime, ie.outtime, ie.los
, ied.admission_age, ied.gender, ied.ethnicity
, ied.first_icu_stay, oa.oasis AS oasis_score
, elix.elixhauser_vanwalraven AS elixhauser_score
, vd.starttime AS vent_start, vd.endtime AS vent_end
, ad.hospital_expire_flag
FROM icustays ie
LEFT JOIN icustay_detail ied
ON ie.icustay_id = ied.icustay_id
LEFT JOIN admissions ad
ON ie.hadm_id = ad.hadm_id
LEFT JOIN elixhauser_ahrq_score elix
ON ie.hadm_id = elix.hadm_id
LEFT JOIN oasis oa
ON ie.icustay_id = oa.icustay_id
LEFT JOIN ventdurations vd
ON ie.icustay_id = vd.icustay_id;
'''
cohort_df = pd.read_sql(query, con)
print(cohort_df.shape)
display(cohort_df.head())
cohort_df = cohort_df.loc[cohort_df.dbsource == 'metavision', :]
cohort_df = cohort_df.loc[cohort_df.first_careunit == 'CCU', :]
cohort_df = cohort_df.loc[cohort_df.admission_age >= 16, :]
cohort_df = cohort_df.drop('dbsource', axis=1)
cohort_df.drop_duplicates(subset='icustay_id').shape | _____no_output_____ | MIT | ventilation/mode_analysis.ipynb | cosgriffc/mimic-ccu |
2 - Identify Ventilator Mode Items | query = db_schema + '''
SELECT itemid, label, dbsource, linksto
FROM d_items
WHERE LOWER(label) LIKE '%mode%'
AND dbsource='metavision';
'''
d_search = pd.read_sql_query(query, con)
display(d_search) | _____no_output_____ | MIT | ventilation/mode_analysis.ipynb | cosgriffc/mimic-ccu |
It appears the `itemid` is __223849__. 3 - Extract Ventilation Modes | query = db_schema + '''
WITH vent_mode_day1 AS (
SELECT ce.icustay_id, ce.charttime - ie.intime AS offset
, ce.value
FROM icustays ie
LEFT JOIN chartevents ce
ON ie.icustay_id = ce.icustay_id
WHERE ce.itemid = 223849
)
SELECT vm.icustay_id, vm.value AS vent_mode_24h
FROM vent_mode_day1 vm
WHERE vm.offset <= interval '24' hour;
'''
vm_df = pd.read_sql(query, con)
display(vm_df.head()) | _____no_output_____ | MIT | ventilation/mode_analysis.ipynb | cosgriffc/mimic-ccu |
Let's look at the distribution of the different ventilation modes in this data. | vm_df.groupby(vm_df.vent_mode_24h).count().plot(kind='bar', figsize=(12,6)) | _____no_output_____ | MIT | ventilation/mode_analysis.ipynb | cosgriffc/mimic-ccu
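A minimal sketch, not from the original notebook, of the next steps in the analysis plan: flag each ICU stay as SIMV/IMV versus other modes, compare unadjusted hospital mortality, and fit a simple logistic regression. The string-matching rule for SIMV and the choice of covariates are assumptions made here for illustration, using the `vm_df` and `cohort_df` frames built above.

import statsmodels.formula.api as smf

# flag a stay as SIMV if any first-day mode entry contains 'SIMV' (assumed grouping rule)
vm_df['is_simv'] = vm_df['vent_mode_24h'].str.contains('SIMV', case=False, na=False)
simv_flag = vm_df.groupby('icustay_id')['is_simv'].any().astype(int).rename('simv').reset_index()

analysis_df = cohort_df.drop_duplicates(subset='icustay_id').merge(simv_flag, on='icustay_id', how='inner')

# unadjusted hospital mortality by group
print(pd.crosstab(analysis_df.simv, analysis_df.hospital_expire_flag, normalize='index'))

# illustrative logistic regression for hospital mortality, adjusting for severity scores
logit_fit = smf.logit('hospital_expire_flag ~ simv + oasis_score + elixhauser_score',
                      data=analysis_df).fit()
print(logit_fit.summary())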
Format Data | import os   # imports needed by the code in this notebook
import json
import numpy as np
import torch
import nibabel as nib
from tqdm import tqdm
from sklearn.model_selection import GroupKFold

def permute(image):
image = torch.Tensor(image)
image = image.permute(3,0,1,2).numpy()
return image
DATA_PATH = '../data/brats_dataset/raw_data/'
OUT_PATH = '../data/brats_dataset/processed_data_2d/'
TABLE_PATH = '../data/split_tables/brats_2d/'
os.makedirs(TABLE_PATH,exist_ok=True)
patient_list = [i for i in os.listdir(DATA_PATH) if i.find('g_')!=-1]
n_slices_width = 128
for patient in tqdm(patient_list):
img_flair = np.array(nib.load(DATA_PATH+patient+'/'+patient+'_flair.nii.gz').dataobj)
img_t1 = np.array(nib.load(DATA_PATH+patient+'/'+patient+'_t1.nii.gz').dataobj)
img_t1ce = np.array(nib.load(DATA_PATH+patient+'/'+patient+'_t1ce.nii.gz').dataobj)
img_t2 = np.array(nib.load(DATA_PATH+patient+'/'+patient+'_t2.nii.gz').dataobj)
seg = np.array(nib.load(DATA_PATH+patient+'/'+patient+'_seg.nii.gz').dataobj)
img = np.stack([img_flair,img_t1,img_t1ce,img_t2],axis=0)
seg = seg.reshape(1,seg.shape[0],seg.shape[1],seg.shape[2])
seg[seg==4] = 3
os.makedirs(OUT_PATH+patient,exist_ok=True)
for i in range(img.shape[-1]):
temp = img[:,:,:,i]
temp_y = seg[:,:,:,i]
#save
np.save(OUT_PATH+patient+f'/{i}_voxels.npy',temp)
np.save(OUT_PATH+patient+f'/{i}_labels.npy',temp_y)
| 93%|█████████▎| 342/369 [22:45<01:42, 3.78s/it] | BSD-2-Clause | notebooks/.ipynb_checkpoints/0_format_BRATS_data-checkpoint.ipynb | neurips2021vat/Variance-Aware-Training |
Prepare split tables | patient_list = [OUT_PATH[1:]+i for i in os.listdir(OUT_PATH) if i.find('.')==-1]
print(f'Total number of patients: {len(patient_list)}')
patient_arr = []
records = []
for patient in patient_list:
records += [patient+'/'+i for i in os.listdir('.'+patient) if i.find('voxels')!=-1]
patient_arr += [patient]*len([patient+'/'+i for i in os.listdir('.'+patient) if i.find('voxels')!=-1])
records = np.array(records)
patient_arr = np.array(patient_arr)
#create test
kf = GroupKFold(n_splits=2)
for (train,test) in kf.split(records,records,patient_arr):
records_test = records[test]
#create test
split = {
'test': records_test.tolist(),
}
with open(f'{TABLE_PATH}test_split_table.json', 'w') as outfile:
json.dump(split, outfile)
break
patient_arr = patient_arr[train]
records = records[train]
# the same 50/50 group split is repeated below on the remaining half: test_split_table.json is overwritten with the smaller test split, and only a quarter of the patients remain for the train/val/pretrain tables
kf = GroupKFold(n_splits=2)
for (train,test) in kf.split(records,records,patient_arr):
records_test = records[test]
#create test
split = {
'test': records_test.tolist(),
}
with open(f'{TABLE_PATH}test_split_table.json', 'w') as outfile:
json.dump(split, outfile)
break
patient_arr = patient_arr[train]
records = records[train]
#create train and validation
n_patients = [1,2,4,8]
patients_unique = np.unique(patient_arr)
for i in n_patients:
train_patients = patients_unique[:i]
train_records = np.empty(0)
for patient in train_patients.tolist():
train_records = np.append(train_records,records[patient_arr==patient],axis=0)
val_patients = patients_unique[-2:]
val_records = np.empty(0)
for patient in val_patients.tolist():
val_records = np.append(val_records,records[patient_arr==patient],axis=0)
split = {
'train': train_records.tolist(),
'val': val_records.tolist(),
'pretrain': records.tolist(),
}
with open(f'{TABLE_PATH}{i}_split_table.json', 'w') as outfile:
json.dump(split, outfile)
#create UB
train_patients = patients_unique[:patients_unique.shape[0]//2]
train_records = np.empty(0)
for patient in train_patients.tolist():
train_records = np.append(train_records,records[patient_arr==patient],axis=0)
val_patients = patients_unique[patients_unique.shape[0]//2:]
val_records = np.empty(0)
for patient in val_patients.tolist():
val_records = np.append(val_records,records[patient_arr==patient],axis=0)
split = {
'train': train_records.tolist(),
'val': val_records.tolist(),
}
with open(f'{TABLE_PATH}UB_split_table.json', 'w') as outfile:
json.dump(split, outfile)
| _____no_output_____ | BSD-2-Clause | notebooks/.ipynb_checkpoints/0_format_BRATS_data-checkpoint.ipynb | neurips2021vat/Variance-Aware-Training |
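For reference, a small sketch (not part of the original notebook) of how one of the generated split tables might be consumed downstream: load the JSON, then read the saved voxel and label arrays for each record. The leading `'.' + record` mirrors the relative-path convention used above and is an assumption about the working directory.

with open(f'{TABLE_PATH}1_split_table.json') as f:
    split = json.load(f)

for record in split['train']:
    voxels = np.load('.' + record)                              # shape (4, H, W): flair, t1, t1ce, t2 slice
    labels = np.load('.' + record.replace('voxels', 'labels'))  # shape (1, H, W): segmentation slice
    print(record, voxels.shape, labels.shape)
    break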
___ ___ Merging, Joining, and Concatenating. There are 3 main ways of combining DataFrames together: Merging, Joining and Concatenating. In this lecture we will discuss these 3 methods with examples. ____ Example DataFrames | import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df1
df2
df3 | _____no_output_____ | MIT | res/Python-for-Data-Analysis/Pandas/Merging, Joining, and Concatenating .ipynb | Calvibert/machine-learning-exercises |
Concatenation. Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use **pd.concat** and pass in a list of DataFrames to concatenate together: | pd.concat([df1,df2,df3])
pd.concat([df1,df2,df3],axis=1) | _____no_output_____ | MIT | res/Python-for-Data-Analysis/Pandas/Merging, Joining, and Concatenating .ipynb | Calvibert/machine-learning-exercises |
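The warning about matching dimensions matters because `pd.concat` aligns on the index: `df1`, `df2` and `df3` use the index ranges 0-3, 4-7 and 8-11, so concatenating them along `axis=1` fills every non-overlapping position with `NaN` rather than raising an error, which is why the result above is mostly missing values. A quick illustrative check using the same frames:

pd.concat([df1, df2, df3], axis=1).isna().sum()   # count the NaNs introduced by the index misalignment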
_____ Example DataFrames | left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right | _____no_output_____ | MIT | res/Python-for-Data-Analysis/Pandas/Merging, Joining, and Concatenating .ipynb | Calvibert/machine-learning-exercises |
___ Merging. The **merge** function allows you to merge DataFrames together using similar logic to merging SQL tables together. For example: | pd.merge(left,right,how='inner',on='key')
Or to show a more complicated example: | left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer', on=['key1', 'key2'])
pd.merge(left, right, how='right', on=['key1', 'key2'])
pd.merge(left, right, how='left', on=['key1', 'key2']) | _____no_output_____ | MIT | res/Python-for-Data-Analysis/Pandas/Merging, Joining, and Concatenating .ipynb | Calvibert/machine-learning-exercises |
Joining. Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame. | left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left.join(right)
left.join(right, how='outer') | _____no_output_____ | MIT | res/Python-for-Data-Analysis/Pandas/Merging, Joining, and Concatenating .ipynb | Calvibert/machine-learning-exercises |
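The key design difference is that `join` aligns on the index by default, while `merge` aligns on columns unless told otherwise. As a small illustrative check (using the same `left` and `right` frames defined just above), this index-based `merge` call should reproduce the result of `left.join(right)`:

pd.merge(left, right, left_index=True, right_index=True, how='left')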
Version 6.0: ground truth using "denoising"; find the pairs that differ and output only those differences. 1. Preparation | from google.colab import drive
drive.mount('/content/drive')
root = 'drive/MyDrive/LM/'
!pip install sentencepiece
!pip install transformers -q
!pip install wandb -q
# Importing stock libraries
import numpy as np
import pandas as pd
import time
from tqdm import tqdm
import os
import regex as re
import sys
sys.path.append('/content/drive/MyDrive/LM/')
from global_param import MyConfig
import nltk
nltk.download("punkt")
from nltk.tokenize.treebank import TreebankWordDetokenizer
detokenizer = TreebankWordDetokenizer()
import torch
from torch import cuda
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader, RandomSampler, SequentialSampler
# Importing the T5 modules from huggingface/transformers
from transformers import T5Tokenizer, T5ForConditionalGeneration
# WandB – Import the wandb library
import wandb
# Login to wandb to log the model run and all the parameters
# 7229adacb32965027d73056a6927efd0365a00bc
!wandb login
myconfig = MyConfig()
# Checking out the GPU we have access to. This is output is from the google colab version.
!nvidia-smi
# # Setting up the device for GPU usage
device = 'cuda' if cuda.is_available() else 'cpu'
print("Device is: ", device)
# Set random seeds and deterministic pytorch for reproducibility
#SEED = 42
SEED = myconfig.SEED
torch.manual_seed(SEED) # pytorch random seed
np.random.seed(SEED) # numpy random seed
torch.backends.cudnn.deterministic = True
# Global Parameter
model_version = "6.3"
load_version = "6.2"
initial_epoch = 0
# WandB – Initialize a new run
wandb.init(project="counterfactual"+model_version)
# WandB – Config is a variable that holds and saves hyperparameters and inputs
# Defining some key variables that will be used later on in the training
# config = wandb.config # Initialize config
# config.TRAIN_BATCH_SIZE = 16 # input batch size for training (default: 64)
# config.VALID_BATCH_SIZE = 32 # input batch size for testing (default: 1000)
# config.TRAIN_EPOCHS = 51 # number of epochs to train (default: 10)
# config.VAL_EPOCHS = 1
# config.LEARNING_RATE = 1e-4 # learning rate (default: 0.01)
# config.SEED = 42 # random seed (default: 42)
# config.SOURCE_LEN = 150
# config.TARGET_LEN = 110
# WandB – Config is a variable that holds and saves hyperparameters and inputs
# Defining some key variables that will be used later on in the training
config = wandb.config # Initialize config
config.TRAIN_BATCH_SIZE = 16 # input batch size for training (default: 64)
config.VALID_BATCH_SIZE = 32 # input batch size for testing (default: 1000)
#config.TRAIN_EPOCHS = myconfig.TRAIN_EPOCHS # number of epochs to train (default: 10)
config.TRAIN_EPOCHS = 41
config.VAL_EPOCHS = myconfig.VAL_EPOCHS
config.LEARNING_RATE = myconfig.LEARNING_RATE # learning rate (default: 0.01)
config.SEED = myconfig.SEED # random seed (default: 42)
config.SOURCE_LEN = 150
config.TARGET_LEN = 70
config.LOAD_PATH = root+'models/model'+load_version+'.tar'
config.SAVE_PATH = root+'models/model'+model_version+'.tar'
PRETRAINED_MODEL_NAME = myconfig.PRETRAINED_MODEL_NAME
# tokenzier for encoding the text
t5_tokenizer = T5Tokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
# Defining the model. We are using t5-base model and added a Language model layer on top for generation of Summary.
# Further this model is sent to device (GPU/TPU) for using the hardware.
model = T5ForConditionalGeneration.from_pretrained(PRETRAINED_MODEL_NAME)
model = model.to(device)
# Defining the optimizer that will be used to tune the weights of the network in the training session.
optimizer = torch.optim.Adam(params = model.parameters(), lr=config.LEARNING_RATE) | _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |
2. Load dataframe | #training df
small_path = root + '/TimeTravel/cleaned_small_2.0.xlsx'
small_df = pd.read_excel(small_path)
#small_df.head()
print(len(small_df))
small_df.head(3)
#valid df
large_path = root + '/TimeTravel/cleaned_large_2.0.xlsx'
large_df = pd.read_excel(large_path)
#large_df.head()
print(len(large_df))
small_ids = []
for i in range(len(small_df)):
small_ids.append(small_df.loc[i, 'story_id'])
print(len(small_ids))
large_df = large_df[~large_df.story_id.isin(small_ids)]
large_df = large_df.reset_index(drop=True) # must reset the index after deleting rows
print(len(large_df))
# select data not in training set
part_large_cleaned_df = large_df[0:100]
#part_large_cleaned_df = large_cleaned_df[0:1000]
part_large_cleaned_df = part_large_cleaned_df.reset_index(drop=True)
print(len(part_large_cleaned_df))
| _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |
3. Dataset and Dataloader | # Creating a custom dataset for reading the dataframe and loading it into the dataloader to pass it to the neural network at a later stage for finetuning the model and to prepare it for predictions
class CustomDataset(Dataset):
def __init__(self, dataframe, tokenizer, input_len, output_len):
self.tokenizer = tokenizer
self.data = dataframe
self.input_len = input_len
self.output_len = output_len
self.input = self.data.input1
self.output = self.data.output1
def __len__(self):
return len(self.data)
def __getitem__(self, index):
input = str(self.input[index])
# input = ' '.join(input.split())
output = str(self.output[index])
# output = ' '.join(output.split())
source = self.tokenizer.encode_plus(input, max_length= self.input_len, padding='max_length', return_tensors='pt')
target = self.tokenizer.encode_plus(output, max_length= self.output_len, padding='max_length', return_tensors='pt')
source_ids = source['input_ids'].squeeze()
source_mask = source['attention_mask'].squeeze()
target_ids = target['input_ids'].squeeze()
target_mask = target['attention_mask'].squeeze()
return {
'source_ids': source_ids.to(dtype=torch.long),
'source_mask': source_mask.to(dtype=torch.long),
'target_ids': target_ids.to(dtype=torch.long),
'target_ids_y': target_ids.to(dtype=torch.long)
}
train_df = small_df
valid_df = part_large_cleaned_df
trainingset = CustomDataset(dataframe=train_df, tokenizer=t5_tokenizer, input_len=config.SOURCE_LEN , output_len=config.TARGET_LEN )
validset = CustomDataset(dataframe=valid_df, tokenizer=t5_tokenizer, input_len=config.SOURCE_LEN , output_len=config.TARGET_LEN )
# max_sou_len = 0
# max_tar_len = 0
# for i in range(len(small_df)):
# input = small_df.loc[i, 'input1']
# output = small_df.loc[i, 'output1']
# source = t5_tokenizer.encode_plus(input, return_tensors='pt')['input_ids'].squeeze()
# target = t5_tokenizer.encode_plus(output, return_tensors='pt')['input_ids'].squeeze()
# max_sou_len = max(max_sou_len, len(source))
# max_tar_len = max(max_tar_len, len(target))
# print(max_sou_len)
# print(max_tar_len)
# max_sou_len = 0
# max_tar_len = 0
# for i in range(len(large_df)):
# input = large_df.loc[i, 'input1']
# output = large_df.loc[i, 'output1']
# source = t5_tokenizer.encode_plus(input, return_tensors='pt')['input_ids'].squeeze()
# target = t5_tokenizer.encode_plus(output, return_tensors='pt')['input_ids'].squeeze()
# max_sou_len = max(max_sou_len, len(source))
# max_tar_len = max(max_tar_len, len(target))
# print(max_sou_len)
# print(max_tar_len)
# pick up a data sample
sample_idx = 4
sample = trainingset[sample_idx]
source_ids = sample["source_ids"]
source_mask = sample["source_mask"]
target_ids = sample["target_ids"]
target_ids_y = sample["target_ids_y"]
print(source_ids)
print(train_df.loc[sample_idx, 'output1'])
sen = t5_tokenizer.decode(target_ids, skip_special_tokens=False) # skip_special_tokens=True will be completely same.
print(sen)
sen = t5_tokenizer.decode(source_ids, skip_special_tokens=False) # skip_special_tokens=True will be completely same.
print(sen)
# DataLoader
train_params = {
'batch_size': config.TRAIN_BATCH_SIZE,
'shuffle': True,
'num_workers': 2
}
val_params = {
'batch_size': config.VALID_BATCH_SIZE,
'shuffle': False,
'num_workers': 2
}
training_loader = DataLoader(trainingset, **train_params)
val_loader = DataLoader(validset, **val_params)
print(len(training_loader))
print(len(val_loader)) | _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |
4. Define train() and val() | def save_model(epoch, model, optimizer, loss, PATH):
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss
}, PATH)
def load_model(PATH):
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
return model, optimizer, epoch, loss
# Creating the training function. This will be called in the main function. It is run depending on the epoch value.
# The model is put into train mode and then we wnumerate over the training loader and passed to the defined network
def train(epoch, tokenizer, model, device, loader, optimizer):
model.train()
for i,data in enumerate(loader):
#len(loader)=10xx
ids = data['source_ids'].to(device, dtype = torch.long)
mask = data['source_mask'].to(device, dtype = torch.long)
y = data['target_ids'].to(device, dtype = torch.long)
# padded ids (pad=0) are set to -100, which means ignore for loss calculation
y[y[: ,:] == tokenizer.pad_token_id ] = -100
label_ids = y.to(device)
outputs = model(input_ids = ids, attention_mask = mask, labels=label_ids)
loss = outputs[0]
#logit = outputs[1]
if i%50 == 0:
wandb.log({"Training Loss": loss.item()})
if i%600==0:
print(f'Epoch: {epoch}, Loss: {loss.item()}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
# xm.optimizer_step(optimizer)
# xm.mark_step()
if (epoch % 5 == 0):
save_model(epoch, model, optimizer, loss.item(), config.SAVE_PATH)
def validate(tokenizer, model, device, loader):
model.eval()
predictions = []
actuals = []
raws = []
final_loss = 0
with torch.no_grad():
for i, data in enumerate(loader):
y = data['target_ids'].to(device, dtype = torch.long)
ids = data['source_ids'].to(device, dtype = torch.long)
mask = data['source_mask'].to(device, dtype = torch.long)
'''
generated_ids = model.generate(
input_ids = ids,
attention_mask = mask,
num_beams=2,
max_length=config.TARGET_LEN,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
'''
generated_ids = model.generate(
input_ids = ids,
attention_mask = mask,
num_beams=2,
max_length=config.TARGET_LEN,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
loss = model(input_ids=ids, attention_mask=mask, labels=y).loss
final_loss += loss
raw = [tokenizer.decode(i, skip_special_tokens=False) for i in ids]
preds = [tokenizer.decode(i, skip_special_tokens=False) for i in generated_ids]
target = [tokenizer.decode(i, skip_special_tokens=False)for i in y]
if i%3==0:
print(f'valid Completed {(i+1)* config.VALID_BATCH_SIZE}')
raws.extend(raw)
predictions.extend(preds)
actuals.extend(target)
return raws, predictions, actuals, final_loss | _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |
5. main() | import time
# Helper function to print time between epochs
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
# if need, load model
loss = 0
if (load_version != None and load_version != ""):
model, optimizer, initial_epoch, loss = load_model(config.LOAD_PATH)
print(loss)
# Log metrics with wandb
#wandb.watch(model, log="all")
# Training loop
print('Initiating Fine-Tuning for the model on counterfactual dataset:')
for epoch in range(initial_epoch, initial_epoch+config.TRAIN_EPOCHS):
#for epoch in tqdm(range(config.TRAIN_EPOCHS)):
start_time = time.time()
train(epoch, t5_tokenizer, model, device, training_loader, optimizer)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f'Epoch: {epoch:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
# Mark the run as finished
wandb.finish()
# Load model
# model = T5ForConditionalGeneration.from_pretrained(PRETRAINED_MODEL_NAME)
# model = model.to(device)
# optimizer = torch.optim.Adam(params = model.parameters(), lr=config.LEARNING_RATE)
# model, optimizer, epoch, loss = load_model(config.LOAD_PATH)
| _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |
6. Inference | # # load model
# model, optimizer, initial_epoch, loss = load_model(config.LOAD_PATH)
# print(loss)
# Validation loop and saving the resulting file with predictions and acutals in a dataframe.
# Saving the dataframe as predictions.csv
print('Now inferecing:')
start_time = time.time()
raws, predictions, actuals,final_loss = validate(t5_tokenizer, model, device, val_loader)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f'Time: {epoch_mins}m {epoch_secs}s')
final_df = pd.DataFrame({'input_text': raws, 'ground_truth': actuals, 'generated_text': predictions})
#final_df.to_csv(root + 'results/' + 'output' + model_version + '.csv')
final_df.to_excel(root + 'results/' + 'output' + model_version + '.xlsx')
print('Output Files generated for review')
print(f'Final Loss is: {final_loss:.5f}')
print(len(actuals))
| _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |
7. Check the samples whose generated edited ending is the same as the original ending | # import pandas as pd
# import regex as re
result_df = pd.read_excel(root + 'results/' + 'output_beam1' + model_version + '.xlsx')
result_df.head()
print(len(result_df))
or_pat = re.compile(r'(original_ending: )(.*)$')
ed_pat = re.compile(r'(edited_ending: )(.*)$')
pipei = re.search(ed_pat, result_df.iloc[0].generated_text)
# pipei = re.search(or_pat, result_df.iloc[0].raw_text)
print(pipei.group(2))
re_pat = re.compile(r'(original_ending: )(.*)$') # regular expression, pick the text after "original_ending: "
#orig = = re.search(re_pat, te).group(2)
or_text = [] # or for original_ending
ed_text = [] # ed for edited_ending
for i in range(len(result_df)):
or_text.append(re.search(or_pat, result_df.loc[i, "raw_text"]).group(2))
ed_text.append(re.search(ed_pat, result_df.loc[i, "generated_text"]).group(2))
print(len(or_text))
print(len(ed_text))
comparison = [i==j for i, j in zip(or_text, ed_text)]
print(comparison)
count = pd.value_counts(comparison)
print(count)
result_df[comparison].head(10)
same_df = result_df[comparison]
same_df = same_df.reset_index(drop=True)  # assign the result back; reset_index does not modify the frame in place
same_df.to_excel(root + 'results/' + 'output_same_b1' + model_version + '.xlsx') | _____no_output_____ | MIT | huggingface_t5_6_3.ipynb | skywalker00001/Conterfactual-Reasoning-Project |